The availability heuristic is a bias that arises when we confuse probability with ease of recall. Without noticing it, we end up answering a completely different question than the original one: instead of "how likely is this?" we answer "how easily did this come to mind?". Only if our experiences of the world were uniformly and randomly distributed – covering all possibilities – would ease of recall equal probability. Of course, this is not the case. With modern media, private experience is not the only source of our thoughts: we read newspapers and blogs, watch TV and consume all kinds of media that tell us what did, could have, or should have happened.

My favourite example of the availability heuristic is related to travelling. Imagine that you’re finally getting to that long-awaited holiday on a paradise island. Your friend drops you off at the airport. As you gather your suitcase and are just about to leave, your friend shouts to you “Have a safe flight!” You say thanks, and proceed to check-in.

This is a good illustration of the availability heuristic because it's you – the one getting on the plane – who is being reminded to have a safe trip. In fact, if you look at the numbers, driving a car is much more likely to be fatal than flying! The discrepancy is huge: statistical estimates say an accident is around two to five times more likely on the drive home from the airport than on the flight itself, though the exact numbers depend a lot on assumptions about who's driving where. When we consider the safety of the car versus the plane, it's very easy to remember examples of planes crashing, or even disappearing altogether. Car accidents, by contrast, are so common that they rarely make the national news.

So it's not just that availability is a poor guide to probability. In the case of mass media, availability can even be inversely related to probability! After all, papers want to report new, exciting things, not car accidents that happen every day. This means that "oh, I saw an article about this in the paper" is not a good guide to the world of things to come.

If you're a nitpicker (I know I am, so there's no shame in admitting it), you might object that wishing someone a safe trip is not really a probability claim. When you say "have a safe trip", you're not stating "I believe your mode of travel is statistically more likely to result in death or injury, and I aim to prevent a part of that by this utterance". No, of course not. Even an economist wouldn't claim such a thing! It's more a statement of wishing your friend well, and hoping for a good trip. But still, I find it funny that we use the word "safe" here, in exactly the wrong place.

Everyone knows Daniel Kahneman's Thinking, Fast and Slow. But if you've already read it, or are otherwise familiar enough with it for it to have low marginal benefit, what could you study to deepen your knowledge of decision making? Here are a few sources I've found beneficial. To find more, you can check out my resources page!

TED talks

In the modern world, we’re all busy. So if you don’t want to invest tens of hours into books, but just want a quick glimpse with some food for thought, there are of course a few good TED talks around. For example:

Sheena Iyengar is so far the only well-known scholar discussing choice in a multicultural context. Do we all perceive alternatives similarly? Does more choice mean more happiness? With intriguing experiments, Iyengar shows that the answer is: it depends. It depends on the kind of culture you're from.

Gigerenzer is known as one of Kahneman's main antagonists. In this talk, he discusses some heuristics and argues that they are more rational than the classical rationality we often consider to be the optimum.

Dan Ariely is a ridiculously funny presenter. For that entertainment value alone, the talk is well worth watching. Additionally, he shows nicely how defaults influence our decisions, and how a complex choice case makes it harder to overcome the status quo bias.

Books

Even though TED talks are inspiring, nothing beats a proper book! With all their data and sources to dig deeper, any of these books is a good starting point for an inquiry into decisions.

For a long time, I was annoyed that there didn't seem to be a good, non-technical introduction to the field of decision making. Kahneman's book was too long and focused on his own research. Then I came across this beauty. In just a little over 300 pages, Hastie & Dawes go through all the major findings in behavioral decision making, and also throw in a lesson or two about causality and values. Definitely worth a read if you haven't gotten into decision making before – and even if you have, because then you'll be able to skim some parts and concentrate on the nuggets most useful to you.

Talking about short books – this is not one of them. This is THE book in the field of decision making. A comprehensive edition of over 500 pages, it covers all the major topics: probability, rationality, normative theory, biases, descriptive theory, risk, moral judgment. Of course, there's much, much more to any of the topics included, but for an overview this book does an excellent job. It's no secret that this book sits only a meter away from my desk – that's how often I tend to reach for it.

This book may be 10 years old, but it’s still relevant today. Stanovich describes beautifully the theory of cognitive science around decisions, Systems 1 and 2 and so on. He proceeds to connect this to Dawkinsian gene/meme theory, resulting in a guide to meaning in the scientific and Darwinian era.

Discussions around decision making often tend to lead to the question "How can I leverage this in my own life?" Unfortunately, behavioral results are not the easiest to apply in the everyday. Sure, knowing about biases is good, especially when you're making that big decision. But in all fairness, loss aversion or the representativeness heuristic is not usually our biggest worry.

For me, personally, the biggest worries revolve around one question: Is this really worth it? And no, I don’t mean that my mode of being is an existential crisis. What I mean is that I often find myself asking whether this particular activity is worth my investment of time and energy. This meta-level monitoring function is a direct result of the two following concepts.

Opportunity cost

If you've studied economics or business, you've surely heard of this. If you haven't – well, you might be missing an important hammer in the toolbox of good thinking. As a concept, opportunity cost is really simple: the opportunity cost of any product, device or activity is what you give up to get it. For example, if I go to the gym for an hour, I'm giving up the chance to watch an episode of House. Of course, there are all kinds of activities one gives up for that hour, but ultimately what matters is the best opportunity forgone – that's the opportunity cost.
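The idea can be sketched in a few lines of Python; the activities and their values below are entirely made up for illustration:

```python
# Hypothetical values, in arbitrary "enjoyment units", of the
# alternatives forgone by spending an hour at the gym.
forgone = {
    "episode of House": 30,
    "reading a novel": 25,
    "a nap": 15,
}

# The opportunity cost is the value of the single best alternative
# given up - not the sum of all of them.
opportunity_cost = max(forgone.values())
print(opportunity_cost)  # 30
```

The key point the code makes explicit: only the best forgone alternative counts, since you could only have done one of those things with the hour anyway.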

You're giving up WHAT to read this?!

I consider this important because it's the ultimate foundation for optimization. Thinking about activities in terms of opportunity costs makes concrete the constraint we all experience: time. No matter how rich or powerful you are, there's always that nagging limit of 24 hours a day. So it pays to think about whether something is really worth your precious time.

Marginal benefit

Marginal benefit (or utility) is also quite simple. The marginal benefit of something is the benefit you get from consuming one extra unit of that good. For example, at the moment of writing this, the marginal benefit of a hamburger would be quite high, since I'm pretty hungry. What's important is that the marginal benefit changes over time – it's never constant. One burger is good, and two maybe even better, but add more and more burgers to my desk and I'll hardly be any happier. In fact, anything over three burgers is a cost to me, since I can't possibly eat all that – I'll just have to carry them to the garbage!
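The shape of the curve can be sketched numerically; the utility figures below are invented purely to illustrate diminishing (and eventually negative) marginal benefit:

```python
# Hypothetical total utility after eating 0, 1, 2, ... 5 burgers.
total_utility = [0, 10, 16, 18, 17, 15]

# Marginal benefit of burger n = total utility after n burgers
# minus total utility after n - 1 burgers.
marginal = [b - a for a, b in zip(total_utility, total_utility[1:])]
print(marginal)  # [10, 6, 2, -1, -2]
```

Note how the first burger is worth a lot, the third barely anything, and everything after that is a net cost.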

Please, no more burgers!

What makes marginal benefit powerful is the idea that even though I'm enjoying something, it doesn't mean I should take in all that I can. A night out is great fun, but after a few pints the marginal benefit often plummets quite fast – you can try this by staying in the bar for an extra two hours next time. Just remember to evaluate the situation next morning! ;)

These two concepts help you ask two things. How much are you getting out of this? What could you get instead? And if the answer is that there's something you want more instead – well, that's a wonderful result! At least now you know what you want! :) Or, well, until the marginal benefit decreases, at least…

I figured it would take a bit more than an hour to write up this post – reality intervened, and two hours were exceeded. This is probably an experience many people share, whether from school, work, or personal life: everything takes much longer than expected. Even though I've written countless essays, blog posts and papers – and they've all tended to take longer than estimated – I still didn't see this coming. There's a term in decision research for this phenomenon: the planning fallacy. In short, it means we are quite idiotic planners: we never see the delays and problems coming.

The planning fallacy is ubiquitous to the point of being scary. As Jonathan Baron recounts in his exquisitely fantastic book Thinking and Deciding (pp. 387-388):

Projects of all sorts, from massive works of construction to students’ papers to professors’ textbooks, are rarely completed in the amount of time originally estimated. Cost overruns are much more common than underruns (to the point where my spelling checker thinks that only the former is an English word). In 1957, for example, the Sydney (Australia) Opera House was predicted to cost $17 million and to be completed in 1963. A scaled-down version opened in 1973, at a cost of $102 million!

When it comes to our predictions of completion times, there seems to be no such thing as learning from history. It seems that every project we begin is considered only as a single instance, not as belonging to a category.

In fact, herein lies one of the keys to overcoming the problem. Relating a single project to a reference class of projects encourages statistical, data-driven thinking and reveals the delays to us. In an experiment (Buehler et al., 1994), students were much better at predicting when they would complete an assignment if they were told to think about past assignments and connect them to the present one.
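One way to make the reference-class idea concrete in code is to scale your naive estimate by how much similar past projects have overrun. A minimal sketch – the past figures are invented for illustration:

```python
import statistics

# Invented reference class: (planned hours, actual hours) of past projects.
past_projects = [(10, 14), (5, 9), (20, 26), (8, 10)]

# Typical overrun ratio in the reference class.
overrun = statistics.median(actual / planned for planned, actual in past_projects)

# Adjust the naive inside-view estimate for a new project.
naive_estimate = 12
print(round(naive_estimate * overrun, 1))  # 16.2
```

The median keeps one catastrophic project from dominating the correction; the point is simply to let history, not optimism, set the multiplier.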

In my opinion, there are two kinds of errors underlying the planning fallacy:

1) Thinking we will have more time or energy in the future
2) Thinking we have prevented the errors that have happened before

The first case is especially relevant to personal projects. There are always several projects demanding attention. When we predict the future, we tend to believe we will have more free time then than we do now. Since all those meetings, unpredictable surprises and events we just want to attend are not yet in view, the future looks promisingly empty. However, once it arrives, it is very likely to look much like the present: the Sunday afternoon I was supposed to use for fixing my bike and catching up on reading was spent over a delightful brunch agreed on just a few days in advance. And so on. It never turns out like you planned.

Plan vs. reality

In the second case – and I think this might be more relevant in complex projects – we believe that we have in fact learned from history, that we've plugged the gaps that proved our downfall in previous projects. Unfortunately, this is exactly what we thought the previous time, too! Surprising problems in a project are surprising precisely because they are not the same errors as before. As always, just around the corner is something we didn't think of, something that catches us off guard. If we only remembered that it's always been like this…

Of course, I'm deliberately painting a gloomy picture here. We do learn from our mistakes and tend not to repeat them. But the point is that there will probably always be new mistakes – and this ought to be reflected in planning. I don't think anybody is better off when projects almost certainly run over their deadlines. For example, by some estimates only 1% of military purchases are delivered on time and on budget (Griffin & Buehler, 1999)! There's certainly room for improvement!

And that improvement is possible. According to Buehler et al. (1994), others are less susceptible to the planning fallacy than the actors themselves. In simple terms: those not invested in the plan are more likely to have a conservative view of the completion time. Unfortunately, they are also likely to err in the opposite direction by predicting overly long completion times! So, it's probably a sensible idea to estimate the actual time by averaging your own estimate and the outsider's estimate. This is hardly optimal, but it's better than relying on a single biased estimate.

So the recipe for better plans seems quite straightforward:

- stopping to think
- considering past projects and their results
- getting an outsider opinion and aggregating estimates

The framing effect is probably one of the best known – and also one of the most interesting – biases due to its generality, hence today’s topic. Let’s start with a classic example from a classic paper:

Imagine that the United States is preparing for the outbreak of an unusual Asian disease that is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. Assume that the exact scientific estimates of the consequences of the programs are as follows:

If Program A is adopted, 200 people will be saved.

If Program B is adopted, there is a one-third probability that 600 people will be saved and a two-thirds probability that no people will be saved.

Which of the two programs would you favor?

As a decision matrix, the situation looks like this:

As it has been formulated, there is obviously no correct answer to the question – the two options are statistically equal. What framing is about is that the way the situation is described influences our decision. If we formulate the question in terms of dead people (with the same cover story):

If Program C is adopted, 400 people will die.

If Program D is adopted, there is a one-third probability that nobody will die and a two-thirds probability that 600 people will die.
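(As a side note, the statistical equivalence of the certain and the risky program is a one-line expected-value calculation; using exact fractions avoids any floating-point quibbles:)

```python
from fractions import Fraction

# Expected number of people saved under each program (600 at risk).
program_a = 200                                        # 200 saved for sure
program_b = Fraction(1, 3) * 600 + Fraction(2, 3) * 0  # risky gamble

print(program_a == program_b)  # True
```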

The formulations, as one can see from the tables, are equivalent. The surprise is that people made different choices in these situations. In the first case, 72 % chose plan A and 28 % chose plan B. With the second formulation, however, only 22 % chose plan C (equivalent to A) and 78 % opted for plan D! If framing had no power over us, we would choose the same option in both cases. So it's not that choosing A or B per se would be irrational – it's that making a different choice just because of the framing is not rational.

The classic example is not a very natural one, however. I certainly hope I will never come across a similar situation! Thankfully, there are also more down-to-earth examples of framing. For example, suppose you are looking for some new dinnerware to buy. Visiting a flea market, you find a nice set of 8 dinner plates, 8 soup bowls and 8 dessert plates. You consider the set to be worth about 32 dollars.

As you're just about to close the sale, the owner of the dinnerware suddenly remembers: "Oh! I just remembered! I also have some tea cups and saucers for the set!" She adds 8 cups and 8 saucers to the set. Inspecting them, you notice that two of the cups and seven of the saucers are broken. How much are you willing to pay now?

Rationally, the set is of course worth more: after all, you get an intact saucer and six intact teacups on top of what you had before. At least it cannot be worth less – you could just throw away the broken pieces (let's assume no costs are imposed on you by receiving or disposing of them).

In fact, what happened in Hsee's (1998) experiment was the following. Those who did joint evaluation, i.e. saw both sets (with and without broken pieces), reasoned just as we did above: the set including broken items was worth a little more. In contrast, those doing separate evaluation, i.e. seeing only one of the sets, considered the second set to be worth less! In their minds, they compared it to a completely intact set and thought "oh, but this has broken items". Those seeing the smaller but completely intact set reasoned "ah, it's all intact and therefore good". So a different frame generated a different evaluation of the intact pieces' worth!

You could argue that the separate evaluators were doing their best – they didn't know about the option of a similar set with additional pieces. And of course, that is correct. However – and this is why framing is such a sneaky bias – real life consists mainly of separate evaluations. In a store you just get to see the item with some strategically chosen comparison items next to it. When evaluating a business project, you're mostly stuck with the description the manager offers. The only advice I can give about framing is that awareness matters.
For example, I've come across situations at work where someone asks me to do a small thing, and I wonder whether I ought to do it now or later. What has helped me is recognizing that the simple now/later choice is just one decision frame. Often, I've felt it's better to back up to a wider frame and ask myself what I should be focusing on in the first place. Sometimes it turns out that I ought to do something much more vital than the request. On other occasions, when there are no other critical tasks, it's perhaps just better to get it done right away.

So, even if I'm repeating myself a bit from last week, it's a good idea to think about the alternatives at hand – and then question them. Are these really the alternatives? Is there a wider frame with other options? And is the description of the alternatives the only and the most relevant one?

So life is not exactly "What You See is What You Get". It's more like "What You See is What You Think You're Getting". Reminds me of this movie (and notice that Neo didn't really reflect much on the frame he was given):

You wander around the store and see a nice-looking pair of speakers. You plug in your iPhone and test them out, rocking out to your favorite tunes. The speakers are very enticing already, but you decide to test the next, slightly more expensive set just to be sure. Comparing the sounds, the more expensive set sounds just so much clearer, with better bass punch, too… Oh, it's just so much better! Walking out of the store with a pair of speakers way better than your needs, you've just exhibited a prime example of context-dependent preferences.

In its simplicity, this bias may sound like an old truth. Sure, our preferences are changed by the context – so what? Unfortunately, in its simplicity lies the problem: this bias has the potential to affect us in almost any situation involving comparisons. And in the modern information era – with comparisons just a few taps away – that's just about any situation.

So what's the bias? To be concise, the point is that choices are affected by changes to the choice set, for example by adding new, irrelevant alternatives. In effect, this can mean that whereas you this time preferred the high-grade speakers, adding a third, middle option might have pushed you to choose the cheapest, lowest-quality ones instead. I'll explain the theory with the help of a few images, borrowed from Tversky and Simonson's paper.

The figure above shows three products that are quite different. Product Z is high in quality (attribute 2) but unattractive due to its high price (i.e. low on "affordability", attribute 1). Product X, on the other hand, is very cheap but low in quality. Product Y is somewhere in between. The worrying part of context dependency is that our choices between options can be largely influenced by adding or removing options. For example, if we start with products X and Z and then add Y, our choices can be heavily influenced by where Y is strategically placed. If Y is placed as in the next figure, a large proportion of decision makers will tend to switch to preferring Z, despite preferring X when the choice was only between X and Z. Let's look at a figure that shows this better.

The reason for the bias is that quality (attribute 2) now seems much more important after seeing that Y has a lot of it, too. X, on the other hand, is still cheap but looks much worse in terms of quality – after all, you don't want the lowest-quality option. Depending on the setup, this will either lead to picking Y (which is not a bias, since you couldn't choose it initially) or picking X (which is, if you preferred Z initially). If you want to see the equations that clarify how the placement logic works, they are in Tversky and Simonson's paper. The same effect is also very nicely explained, with a different example, in Dan Ariely's lecture here.

The gist of the issue is this: context dependency means that with some set of options, we would choose X over Z, whereas a change in the option set – for example, adding Y – may nudge us to choose Z instead. What you see influences you heavily.

So what's the problem with preferring X in one situation and Z in another? The problem is twofold. First, if our choices are affected by options we don't even pick in the end – so they should be irrelevant – it can be argued that our sense of what we actually want is problematically limited. Admittedly, this is a big concern in its own right. A bigger issue is that often we don't get to pick the options we see. This means our choices can be influenced by marketers and other people who have the power to set up the choice situation.

Thankfully, I think there's a remedy. Contextual choice works both ways, so you can use it to your advantage, too. When considering what you'd prefer, you can play out the situation by creating different alternatives – even irrelevant ones – and reconsider your choice.
This kind of thinking will not only make you less susceptible to choosing on a whim; as you consider things more carefully, thinking about other alternatives may also give you ideas about what's actually possible in the situation and what you actually value.

The status quo bias is, in short, the tendency to continue with the current state of affairs, despite being given the option to pick something better. Let's illustrate it with an example:

At least here in Finland, one has the opportunity to compare different electricity providers. The quality of the product is the same, but the price differs. Now, if a consumer – let's call him John – chooses to remain with his current provider despite knowing that there are equally good and cheaper options, he is epitomizing the status quo bias: overly preferring the current state.

In reality, the example above is often more about not wanting to expend effort on comparing providers. However, what's especially worrying about the status quo bias is that it goes further: it affects our judgments of what is good in the first place. The existence of better alternatives is what marks the difference between a rational status quo choice and an irrational bias.

Consider a study on electricity customers in the US, done with two groups of people: one group with reliable, more expensive service, and the other with cheaper, less reliable service. Both groups were given six electricity plans to choose from: their current status quo and five alternatives. Each alternative's service level and price were defined relative to the group's current plan, to make the options realistic. Even though both groups were earning comparable amounts, there was a huge difference in their choices. In fact, almost 60 % of respondents – in both groups – chose their own current status quo! It appears that their choices were driven only by their current plan, and as the groups had very different plans, they ended up with very different preferences.

Why is this an example of a bias? Well, if we consider that reliability-preferring and price-preferring customers are demographically equal (as they were in the experiment), it is nonsensical that there would be two optimal price-service combinations, and that the current plans would be exactly those. Instead, it seems the customers' preferences were caused by the status quo.

What makes the status quo bias difficult to notice is that sometimes the current state might indeed be good enough. As we have limited energy for thinking, it's best to focus on improving things that are going badly. But – as the bias shows us – what we think is "good enough" could in many cases be improved with little effort.
Sure, in the case of electricity the stakes might not be very high, but there are cases with much more import. Consider long-term saving, for example. You probably have some portfolio of investments: shares, managed funds, index funds and so on. How good is your portfolio compared to others? The general answer – "probably good enough" – is understandable. However, it just highlights the status quo bias: sticking to the current state despite having options. And in this case, the gains can be immense. A return just 1 percentage point better on a 10 000 euro initial investment makes for an almost 7000 euro difference over 25 years!

At this point, I unfortunately don't know any good magic tricks to avoid the status quo bias. But in my opinion, noticing the problem is the first step to success. So in that spirit, the noticing phase might look like this:

Am I preferring the current state?

What other alternatives are there?

Is the current state really better than all the other options?

The trick here is to generate the alternatives first, because comparing against them makes the bias easier to notice. If you go straight from step 1 to asking whether the status quo is good enough, it's just too easy to answer "yeah, probably so". If the current state really is best, at worst you'll lose a few minutes thinking about the alternatives. If it isn't, at least you'll find out!
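As a footnote, the savings figure mentioned above is simple compound-interest arithmetic. Here I assume a 4 % baseline return against a 5 % alternative, compounded annually – the exact difference depends on the baseline you pick:

```python
def future_value(principal, annual_rate, years):
    # Compound interest, compounded once per year.
    return principal * (1 + annual_rate) ** years

base = future_value(10_000, 0.04, 25)    # assumed baseline return
better = future_value(10_000, 0.05, 25)  # one percentage point more
print(round(better - base))  # about 7200 euros
```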