Bets and Beliefs

I fear that Tyler’s latest post on bets and beliefs will obfuscate more than clarify. Let’s clarify. There are two questions: Do portfolios reveal beliefs? Do bets reveal beliefs?

Tyler has argued that portfolios reveal beliefs. This is false. If transaction costs were zero and there were an asset for every possible future state of the world then this would be true. Since transaction costs are not zero and there are many more states of the world than there are assets–even when we combine assets–portfolios do not reveal beliefs. Portfolios might reveal a few coarse beliefs but otherwise no go. Since most people have lots of beliefs about the future but don’t even have a portfolio (beyond human capital) this should be obvious.

Do bets reveal beliefs? Usually but not necessarily. Two people made bets with Noah Smith. Each thought Noah was an idiot for making the bet. Noah, however, had arbitraged so that he couldn’t lose. Clever Noah! Noah’s bets, either alone or in conjunction, did not reveal his beliefs. But is this the usual situation? No.
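Noah’s construction can be sketched with invented numbers (the stakes below are hypothetical, not the actual terms of his bets): take opposite sides with two counterparties at odds generous enough that the net payoff is positive in every state of the world.

```python
def is_arbitrage(payoffs_by_outcome):
    """True if the net payoff is positive in every possible outcome."""
    return all(net > 0 for net in payoffs_by_outcome.values())

# Hypothetical stakes: win $100 from one partner if inflation stays low,
# $120 from the other if it runs high, risking $80 on each side.
net_payoffs = {
    "low inflation": 100 - 80,   # win bet 1, lose bet 2 -> +20
    "high inflation": 120 - 80,  # win bet 2, lose bet 1 -> +40
}
print(is_arbitrage(net_payoffs))  # True: a sure profit either way
```

The point is that each bet, viewed alone, looks like a strong claim about inflation, while the portfolio of the two bets claims nothing at all.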

For the same reasons that portfolios don’t reveal beliefs, high transaction costs and few assets relative to states of the world, it’s going to be difficult to arbitrage all bets. Many bets in effect create a new and unique asset that can’t be easily duplicated and arbitraged away in other markets. I once bet Bryan as to what an expert would answer when asked a particular question. Hard to arbitrage that away.

I also agree with Bryan that the question is empirical and not simply theoretical. When I say that a bet is a tax on bullshit the implication is not just that bullshitters are more likely to lose their bets but also that a tax on bullshit reduces its supply. The betting tax causes people to think more carefully and to be more precise. When people are more careful and precise the quality of communication increases. As Adam Ozimek writes:

In a lot of writing in blogs it is unclear specifically what the writer is trying to say, and they seem to wish to convey an attitude about a certain position without actually having to make a particular criticism of it, or by making a much narrower criticism than the rhetoric implies…It is useful to have betting because deciding clearly resolvable terms of a bet leads to specific claims…

Tyler argues that under some conditions betting won’t change what people say (under a wide range of portfolios…a matter of indifference… bets won’t be authentic) but Tyler doesn’t give us a specific, testable prediction. The empirical evidence, however, is that small bets do cause people to change what they say. This is one of the reasons why even small-bet prediction markets work well.

Tyler has his reasons for not liking to bet but if you think one of those reasons is that he has already revealed his beliefs then you are surely not a loyal reader.

But I believe that Tyler’s point has some truth to it and that truth is founded in Bryan Caplan’s previous work on rational irrationality (as Tyler points out). I may make small bets for a variety of reasons and not think carefully about my beliefs because the bets are small. For example, I may make a bet based on a short-term impulsive belief (the Yankees look like they will win tonight) but will only change my investment if I can persuasively state an argument for why my belief should change.

I wonder if all those people without portfolios really do have lots of beliefs. Perhaps someone will say he thinks the price of gold will go up and may even make a small (private with friends) bet to that effect. Yet, if he is an economist, then he will probably believe in the EMH and not actually invest in gold. Clearly, his belief in the EMH is deeper and more ‘true’ than his belief that gold will go up.

Most liquid financial instruments are not specific bets, and are at best approximate macroeconomic proxies. Simple case: if I want to express the belief that American unemployment will fall below 7% next year, how would I do it?

Buy equities? Maybe employment improves but corporate profits still fall. Sell treasuries? The Fed could keep rates low even if employment picks up. Buy housing? Yes, but what if I think housing is overpriced even if the economy picks up? Buy oil? Maybe, but what if fracking increases the supply of oil even more than demand?

In short, financial markets are very far removed from the type of specific prediction bets that are found in the blogosphere. The most successful traders are not really distinguished at being great forecasters. Most great traders instead excel at identifying positions that are going to perform robustly across a range of outcomes and uncertainties.

One reason bets may not reveal beliefs is that, for many of them, non-monetary factors outweigh the monetary payoff.

For example, suppose I just had the one bet with Brad DeLong, and had not arbitraged. In that case, I would have still been happy to lose, and buy Brad a single dinner, because it would be a lot of fun, and not very expensive. In addition, the mere publicity of the bet was probably worth more than $20 to me in entertainment value!

Now suppose bets could only be made in some minimum unit, say 10% of one’s yearly income, that would be financially meaningful. This would force the monetary payoff to outweigh the non-monetary payoffs. But if I made this big of a bet, I really would look to hedge my financial risk with a countervailing side bet in a real financial or futures market.

And remember that 50% of the time, financial or futures markets will offer better odds than a personal bet.

So in conclusion, I’d say that large bets on outcomes for which financial markets exist (e.g. future inflation) often do not reveal beliefs. And small bets are usually done for the sake of fun; they reveal binary beliefs, but not the strength of those beliefs.

I’m not “having a go” as they say across the pond, but I know what others will find an irresistible opportunity when I see one. I don’t make fun of people, but for the right price, I could start, and then stop.

large bets on outcomes for which financial markets exist (e.g. future inflation)

are indistinguishable from (or, perhaps properly, are part of) portfolios, in other words. But surely this gets into quibbling; one favors liquid financial markets for the same reason as favoring bets. The question is what to compare it to. Alex is comparing bets to situations where there are not robust financial markets, and comparing bets to survey answers or voting. Certainly we’d all agree that they add nothing to cases where robust financial markets exist.

In addition, the mere publicity of the bet was probably worth more than $20 to me in entertainment value!

Understandable, but it doesn’t change the point that even more people choose to make silly statements when there are no bets on the line. I think I believe that people like you (and me, I would think too) are rare enough that in the aggregate, it does not hold.

Even at $20 bets, people seem to give more accurate answers than they do with mere assertions, even when the bets are made for the sake of fun.

I think the money part of the bet is often only symbolic. The real power of the bet is that it forces people to specify exactly what they mean, and the money changing hands has more to do with there being a clear winner and loser of the bet.

At my work (trading floor) we bet often on all kinds of disagreements. Usually the wagers are a dollar or less. One trader prefers quarters, which he tapes to his computer screen with the loser’s name on them as a sign of his victory. Sometimes the bets are even non-monetary. The loser is forced to wear a stupid shirt/tie/hat all day and explain to questioners that he is wearing it because he lost a bet. The key is that there is a clear acknowledgement of the claim being made and of whether it was correct or not. I think it is incredibly effective in reducing BS.

Now sometimes bets are arbitrageable, but in those cases you can often bait people into taking a bet that is off market. In that case it does not signify beliefs.

“One of these is that for many bets, non-monetary factors outweigh the monetary payoff”.

Indeed. Except that one would say that’s actually why the bet works. I am ill-placed to judge your relative fame/standing to Brad DeLong and/or whether he would agree to a dinner with you anyhow even without losing a bet.

But the fact of the matter is that, for me, the viewing public, the audience of both you and Brad DeLong, if you bet a dinner with him and you lose, I will conclude “Aha, on this topic at least, Brad showed himself the superior economist”.

Now you may not care about your reputation as an economist, or the fun of an evening out with Brad might more than compensate for the loss of reputation in your own eyes… but I somehow doubt too many people are that disengaged from their own professional reputation…

As per eccdogg’s comments, except that, on trading floors, this turned into a silly tradition (which was then stamped out, at least where I worked), thus diminishing the shame attached to losing the bet.

Here’s the fundamental issue I have with Prof. Tabarrok’s theory. It commits what I see as the cardinal economist’s sin — it assumes that *everything can be valued solely (or most effectively) in money.* But there are a million other ways in which I can demonstrate my belief in, or commitment to, an idea or ideal, many of which are significantly more personally meaningful than money.

The idea that all human endeavors can be reduced to a monetary value is a convenient simplifying assumption. But that’s all it is.

‘The idea that all human endeavors can be reduced to a monetary value is a convenient simplifying assumption’

Which is why this blog is called Marginal Revolution – your quote describes the essence of it.

‘This moment in the late 19th century marked a turning point in economic theory. With the introduction of these theories, the analysis of production and exchange turned away from social theory and towards the quest of a scientific objectivity. Classical economics had focused on the causal relations among social activities, which were connected with the production and distribution of wealth. Classical economists asked questions about the true basis of value, activities that contributed to national wealth, systems of rights, or about the forms of government under which people grow rich. But in the late 19th century, in response to attacks from socialists and debates about how society works, and as a means to escape the conundrums of value theory and to answer how values could become prices, economists developed the theory of marginalism.’ http://en.wikipedia.org/wiki/Marginalism#The_Marginal_Revolution

Nope, that’s not what it’s assuming at all. It’s assuming that money is *an* effective way of measuring everything, not the *sole* way. In particular, he’s arguing that it’s more useful than simple survey questions (and Bryan Caplan that it’s more effective than voting, or portfolios.)

You are perfectly free to list other ways of getting people to demonstrate their beliefs. Many of those non-monetary ways are indeed very practical, and some are definitely more useful than small bets or portfolios. However, survey questions or voting seem to be less probative than bets.

The weakness of the small bet research is that people can perfectly well know when their own views differ from the norm or from what they think the experimenters believe, and, without actually changing their own beliefs, answer so as to win the bet. Quis custodiet and all that. Of course, a rigorous Bayesian updater might say that that’s good enough reason for them to update their own beliefs.

Ah, but “a bet is a tax on bullshit” is much catchier than “a bet is a more reliable indicator of one’s beliefs than a survey.” I don’t disagree with the second, but I doubt that it’s all that meaningful.

A bet IS a tax on bullshit. It’s not the only possible tax. Even if I care about a million things other than money, adding money to the equation, and holding all else equal, incentivizes me further to state my actual beliefs. I may not want to lie on television or mislead others, but also forcing me to bet money on what I say will further incentivize me to reveal my actual beliefs. That doesn’t mean only money matters. You have committed the common anti-economist sin of assuming that because economists believe money is an incentive, they believe it is the only incentive.

I am a disloyal reader, and I am willing to bet that Prof. Cowen’s publicly revealed beliefs are remarkably congruent with those required to continue in several of his current positions, which are unprotected by tenure.

I already quoted the point about the Marginal Revolution above and its rejection of measuring value in terms which reflect broader social concerns, but if Prof. Cowen were to use this web site to extensively support collective worker rights while demanding much higher taxes for the group variously described as ‘job creators’ or the ‘1%’, I am quite confident that the Mercatus Center would have a new General Director in remarkably short order. A fact I am more than willing to bet that Prof. Cowen is very well aware of, in part indicated by the remarkably timid response to the Cato affair at this very web site. ( https://en.wikipedia.org/wiki/Cato_Institute#Shareholder_dispute )

I have no insight into Prof. Cowen’s internal beliefs, of course – which is why I talked about ‘publicly revealed’ ones. But that is all that counts in the world of shaping public policy. Maybe, some day, the real beliefs of Prof. Cowen will become apparent to his loyal readers. The disloyal readers already know that those beliefs do not exist in a vacuum.

Keep in mind I take no pleasure in anyone embarrassing themselves, but the claim that “this mainstream economist should come out in favor of this purely political (Marxist) notion to prove he’s not a shill” is stupid. Take as much time as you need.

All your arguments make the exact opposite points than you are trying to insinuate, but you are impenetrable, so name precise terms.

The Cato control struggle shows exactly how hard it is to control the people you think it must be easy to control. If it were like you think it is, they would not have had to take control of the organization THEY FOUNDED.

If tenure were so cushy, why would they be doing so many side hustles? Because the Kochs want a secret vanity textbook? That’s oxymoronic. A better line of inquiry would be why anyone gives money to universities.

For almost anyone, there is a set of claims about reality or morality that, if they made any of those claims, they could expect to pay a high cost. That might be losing their job, losing friends, having their wife or husband mad at them, not getting invited to parties, getting dirty looks, getting beat up in the parking lot–whatever.

Identifying that set is useful, because it’s a set of things that a given person would probably not say even if they believed them. If Tyler would lose his job immediately upon stating that, say, Barack Obama is a secret space alien, then Tyler’s silence on the matter of Barack Obama’s secret space alien-ness tells us nothing about his real beliefs on the matter. That doesn’t mean Tyler secretly believes Obama is a space alien in this example, just that if he did believe that, he would never say so. Depending on the topic and strength of the taboo or amount of offense the wrong people might take, there may even be statements Tyler feels obliged to make that he explicitly doesn’t believe. (Or there may not–I don’t know.) Historically, there have been many examples of people required to claim to believe or not believe various stuff (“I am not now, nor ever have I been, a member of the Communist party….”) in order to hold various jobs.

An example of this is the country-reaffirming expressions from just about every politician in their speeches. The fact that every presidential candidate with a chance of winning an election says in his speeches that the USA is the greatest nation ever conveys no information at all about what those people really think about how great the USA is or isn’t. Saying stuff like that is required to run successfully for president, so even if Obama or Romney thought the USA was nothing all that special, they’d say the same things.

What a stupid comment. A person with beliefs like that would not end up at the Mercatus Center and a person who had a conversion to those beliefs would not want to remain at the Mercatus Center. No one is being held hostage or having a gun held up to their head. Shockingly, free market advocates end up at institutions devoted to researching the free market.

Indeed. He’s trolling for strong reactions by making emotionally charged comparisons and comments. It’s best just to ignore those kinds of comments. The very worst thing you can do to a troll is to not respond to them.

Think tanks often have explicit or implicit ideological limits, either because of their mission or to keep from upsetting the donors too much. (Think Jason Richwine.)

One problem this raises: a lot of the people who are paid to think seriously and deeply about big issues in our society are at think tanks. Those people overwhelmingly live within ideological limits, and cross those limits only if they don’t mind losing their jobs. (Even if you think you’d eventually get another job, you still probably don’t want to lose the one you’re in now.) That seems like a pretty bad situation to be in. I don’t know whether Tyler would lose his job or title or whatever if he suddenly decided he was Naomi Klein’s[1] newest disciple, but it’s a very unhealthy thing for us all to have a big fraction of people like Tyler knowing there are places they *don’t dare* go intellectually, even if they’re convinced of them, lest they lose their jobs.

[1] I’m going to go out on a limb and guess that this isn’t very likely to happen anyway.

It’s amazing how rural, red-state politicians tend to be against abortion rights. How convenient for them politically, am I right! Their beliefs are so remarkably congruent with those required to continue in their elected positions!

It’s amazing how urban, blue-state politicians tend to be for abortion rights. How convenient for them politically, am I right! Their beliefs are so remarkably congruent with those required to continue in their elected positions!

I think that Alex, Bryan, Tyler, and Noah are not even really agreeing on what they’re discussing, or what they’re comparing bets and portfolios to, exactly. Alex’s complaint that portfolios don’t reveal *all* beliefs, and that small bets are better than no bets is sailing past Noah’s observation that portfolios are at least equal and generally superior to bets on questions which are addressed in portfolios. (And that large bets are better than small bets, but that sufficiently large bets have to be considered in terms of one’s overall portfolio, due to inevitable hedging.) I think that there’s a broad range of agreement that would be revealed through more careful discussion and definition of terms.

If they attempted to form a bet about the results of research, well, that would be one way of obtaining the sort of careful definition desired here. Not the only way, to be sure, but one way. (Which is part of what Alex is trying to say, I reckon.)

I think portfolios do a great job of revealing beliefs. More specifically, they reveal beliefs weighted by confidence in them. People have diffuse and unspecific portfolios not because of transaction costs (that is, however, why they don’t micromanage them constantly) but because they don’t have a lot of true confidence in their beliefs. General portfolios show that people actually understand that they don’t know all that much about what the future will be.
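One textbook way to formalize “beliefs weighted by confidence” is the Kelly criterion (my gloss, not something this comment invokes): the fraction of wealth it is optimal to stake grows with the gap between your probability estimate and the odds on offer, so stake size acts as a readout of belief strength.

```python
def kelly_fraction(p, b):
    """Kelly-optimal fraction of wealth to stake on a bet paying b-to-1
    when you assign it win probability p. A non-positive result means
    the bet is not worth taking at these odds."""
    return (p * (b + 1) - 1) / b

# A weak belief barely moves the stake; a confident one moves it a lot.
print(kelly_fraction(0.51, 1.0))  # ~0.02: 2% of wealth on a near coin flip
print(kelly_fraction(0.70, 1.0))  # ~0.40: 40% of wealth on a strong belief
```

On this reading, a diffuse portfolio is exactly what many small, low-confidence probability edges would produce.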

I’m not even sure what I believe about my portfolio. I think Amazon is a sure thing, but currently have none. Maybe we need all the palm readers to become portfolio readers to tell us what we really think.

The market price is a weighted claim, though. That is, it isn’t a guarantee of a given return but merely a statement of probable returns as compared to the “risk-free” (ha!) rate used as a reference point. If there were no question as to the likelihood of a given return, there would be no question as to the value of the security (unless the reference rate of return changes.) i.e. If security X is expected to produce some discounted cash flow, and you believe it *might* produce said DCF but is more likely to fall short, you can plausibly claim that security X is overvalued. If there is no uncertainty around what X will generate, then there is very little question as to what X is worth and there should be little market fluctuation around that value, and most of it surrounding expectations about the reference “risk-free” rate.
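The “weighted claim” point can be made concrete with a toy probability-weighted discounted-cash-flow calculation (all probabilities, cash flows, and the 3% reference rate below are invented for illustration):

```python
def present_value(cash_flows, rate):
    """Discount a stream of future cash flows at a reference rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

def expected_price(scenarios, rate):
    """Probability-weighted present value across scenarios.
    scenarios: list of (probability, cash_flow_stream) pairs."""
    return sum(p * present_value(cfs, rate) for p, cfs in scenarios)

# Security X either hits its projected cash flows (40% chance)
# or falls short (60% chance).
scenarios = [(0.4, [10, 10, 110]), (0.6, [5, 5, 60])]
print(round(expected_price(scenarios, 0.03), 2))  # ≈ 86.61
```

If you think the shortfall scenario is likelier than the market’s implied weighting, you can plausibly claim X is overvalued even while agreeing on both possible cash-flow paths.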

BTW: what is the effective Federal tax rate on lottery winnings? How do state rates of taxation on lottery winnings vary? Why do we hear no prominent and sustained arguments for high tax rates on both 1) lottery winners and 2) lottery players? (Preface to a longer piece on the role lotteries play in wealth [re]distribution: how many lottery winners succeed in transferring their wealth to their offspring? From what little I’ve heard, most lottery winners are no better off a few years on than most former NBA stars.)

Aren’t there some sites that allow you to make bets on outcomes? Both for fun and for money? How do the predictions between the two sites compare? I would think this would tend to test the idea to some degree. Of course, both sides can look at what the other is doing, but still you should either see a significant difference or not.

The relationship between bets (or portfolios) and beliefs is framed in terms of there being a financial exposure of some kind which may or may not reveal a pre-existing belief. But the causation can work the other way round, particularly with positions that are made public. Someone might at first have had no very strong feeling on whether event X would occur or not but then, for whatever reason, takes a public position that X will occur. He will then be ‘on the pro-X team’ and liable to find himself increasingly persuaded that X is bound to occur. This can happen easily: a public intellectual is asked whether X is more likely than not-X and says yes on nothing more than instinct; X is generally reckoned to be very unlikely but the predictor thinks it is in fact 55% likely and says so because this is a striking opinion; someone takes a job because it is the best one available, but its future turns out to be heavily dependent on X happening; and so on. In each case, the person then becomes associated with other people who expect X to occur, the person hopes for X to occur, the person’s conception of himself as a good predictor of the future becomes bound up in X occurring, etc, etc. (The psychology is similar to the theory that people are less likely to break publicly-made promises.)

Now, in all of those cases, the bet or portfolio decision is likely to be the same throughout, but the degree of belief changes over time (as the predictor becomes increasingly wedded to his prediction). So does the bet or portfolio decision reveal the initial level of belief or the final level?

Another point on the causation of beliefs and bets: being asked to make a bet on X can create a belief that did not exist before. You might have absolutely no opinion on the outcome of some sporting event, but if everyone else is joining in the office sweepstake on it then you might well consider the matter for the first time and make a sincere prediction. So the extent of your beliefs is conditioned by the opportunities for betting on them which are presented to you. Yes, the bet reveals your belief as to the outcome of the sporting event, but not by flushing out your insincere posturing about it: the creation of the asset class in the first place caused your belief. Betting taxes ignorance as well as bullshit.

It may be that different bets in one’s portfolio cancel either partially or completely, as in Noah’s example. But for the reasons that Alex cites, transaction costs and an incomplete set of market instruments, perfect hedging and arbitrage will be the exception rather than the rule. Even though Noah is not taxed, one of his two trading partners must have gotten the odds wrong and made a poor bet. Otherwise, Noah could not make a guaranteed profit.
The person who made the poor bet will, over many such trades, lose money, and thus “pay a tax” for mis-estimating probabilities. (It’s possible that Noah’s partners also did their own arbitrages with someone else, but, aside from highly contrived and unlikely situations, the chain must end somewhere with someone making a straight unhedged bet.)
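A minimal sketch of that “tax” (a toy model, not anything from the thread): a bettor who accepts odds that are fair under his own probability estimate loses in expectation whenever the event’s true frequency is lower than he believes.

```python
def expected_profit(p_true, p_believed, stake=1.0):
    """Expected profit from repeatedly staking `stake` at odds that are
    fair under the believed probability, when outcomes actually occur
    with probability p_true."""
    win_payout = stake * (1 - p_believed) / p_believed
    return p_true * win_payout - (1 - p_true) * stake

# A calibrated bettor breaks even; an overconfident one pays the tax.
print(expected_profit(0.5, 0.5))   # 0.0
print(expected_profit(0.5, 0.7))   # ≈ -0.286 per dollar staked
```

The tax is per trade, which is why it only bites "over many such trades": a single lucky win can mask a badly mis-estimated probability.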

Sort-of. You need to consider what you’re hedging against, what your expectations of income and such are, etc., as well. An 80 year old man and a 20 year old man with the same amount to invest and identical beliefs about the future should probably have pretty different portfolios, for example.

They probably should but that doesn’t in any way change the fact that people who bet using the wrong odds and who are not fully hedged are going to lose on average. That’s all that Alex is claiming, and he’s right about that.

a. There is a vast amount of bullshit spread by people who are alleged to be smart people, and who are given megaphones at every opportunity.

b. There are usually no negative consequences of that bullshit. Pundits are famously awful at predicting anything.

So, what I think we wish we could get is a way to get smart people to make statements that they really believe, as opposed to statements that are fun to make, or that benefit their side of some dispute, or that are just whatever happened to pop into their head with an hour before their deadline. And we’d like them to somehow signal to us “Here, this is something I mean you to take seriously.” But we don’t want that signal to be used falsely, the way many columnists and politicians and some academics will use language or dress or associations with powerful institutions to signal seriousness while blowing smoke or scoring partisan points. We want there to be a cost to blowing smoke, so they will have an incentive to only use that signal when they mean “I really am saying what I believe here, and I think I know what I’m talking about[1].”

Now, a bet is probably not usually a great mechanism for that, for a lot of reasons Alex and Noah mentioned. But it’s still *something*.

I see two other advantages of betting outside the “tax on bullshit” advantage:

a. It requires a specific testable prediction. Pundits love to make broad sweeping claims that can’t really be nailed down, like “Obama is leading us down the slippery slope to socialism.” This may be right or wrong, but how am I going to even tell? Change that to a claim about, say, the government budget as a fraction of GDP in Obama’s last year in office, and at least we can all be clear what we’re talking about.

b. It requires finding a real, live human making a contrary claim, typically some other pundit that the bettor considers serious. That provides some protection against strawmanning the opposition–you have to find someone to take the other side of the bet.

[1] For an academic, one way to get this is to look at what they publish professionally. Tyler might throw something together half-baked for a blog post, but he probably won’t submit something he thinks is half-baked to a journal. But that’s pretty low bandwidth.

These are unusual propositions to settle on. They make me wonder if I’m the only one who values quality arguments over belief.

Suppose the imagineers produced sophisticated animatronic Cowens and Tabarroks, indistinguishable from the originals, but programmed to parrot their arguments.

These zombiefied speakers have no beliefs whatsoever. For the sake of argument, let’s assume the “real” parties to this debate hold beliefs, and are not animatronic impostors. Now, hearing the automatons speak on the subject seems identical to the experience of hearing the original articles.

Do we really care if these two indistinguishable experiences produce similar impacts on a listener? Are those listening to the animatronics somehow conned, relative to the content of the arguments?

If given an argument, you have two options: attack the validity or attack the premises. Attacking the motives of the speaker doesn’t address either. If a known liar presents you with a valid argument and verifiably true premises, the conclusion still holds. If a saint produces only flimsy and fallacious arguments for some idea, proof of their committed belief is no proof at all.

I’m not worried about this casting me adrift in a sea of bullshit, because I believe the audience has an obligation to actively participate in the evaluation of any communicated arguments or ideas. If this were not the case, why bother learning about fallacies or reasoning at all?

In an elementary logic textbook, arguments proceed from a set of verifiably true or false premises through a chain of deductive reasoning to a conclusion. Real-world arguments almost never look anything like that (so much so that I think we do students a disservice by concentrating so heavily on deductive logic in “critical thinking” classes). In the real world a listener may not have the wherewithal to evaluate the truth of premises. Worse, you can’t assume the irrelevance of facts not included in the premises because even simple arguments rely heavily on context. Often the premises and conclusions alike are stated in vague language that makes it hard to say what is even being claimed, let alone verify it. Furthermore, real-world arguments are often inductive rather than deductive, and that makes checking the validity of the reasoning a judgment call. In other words, real-world arguments are a lot more complex than the caricatures of arguments you see in textbooks. It’s hopelessly naive to suggest that you can either “attack the validity or attack the premises” because in most cases you don’t really have the tools you need to do either. Nevertheless, we do form opinions in response to informally structured arguments. Mostly, we use heuristics, such as the expertise or trustworthiness of the speaker. The reputational feedback from betting improves our estimate of the former, and the commitment demonstrated by “putting your money where your mouth is” gives an indicator of the latter. Moreover, adding a tangible cost to bad reasoning tends to drive it out of the discourse, improving the quality, on average, of what’s left.