P1: Reading Keynes

I am planning a sequence of posts on re-reading Keynes, in which I will try to work through the General Theory. This first post explains my motivations. As always, my primary motive is self-education; this will force me to go through the book again. I first read it in my first-year graduate course on macroeconomics at Stanford in 1975, when our teacher, Duncan Foley, was having doubts about modern macro theories and decided to go back to the original sources. At the time, I could not understand it at all, and resorted to secondary sources, mainly Leijonhufvud, to make sense of it. Second, I hope to summarize Keynes's insights in a way that makes them relevant and useful to a contemporary audience. Third, there are many experts on this blog, especially Paul Davidson, who will be able to prevent me from making serious mistakes of interpretation.

Reasons for Studying Keynes

“The heart has its reasons of which reason knows nothing.” Blaise Pascal

In line with the objectives of the WEA Pedagogy Blog, I am initiating a study group with the aim of [re-]reading Keynes's classic The General Theory of Employment, Interest and Money. There are many reasons why I think this is a worthwhile enterprise. I hope to make weekly posts summarizing various aspects of the book as we slog through the work, which can be difficult going in parts. At the very least, this will force me to re-read Keynes, something I have been meaning to do for a long time. In this first post, I would like to explain my motivation for undertaking this exercise.


I applaud the idea, but please also analyse Minsky's "John Maynard Keynes". Outside of Keynes, I think that Minsky understood the effects of financial markets and speculative bubbles better than any other economist. The 2007/8 sub-prime crisis was a real Minsky moment.

A very good idea, IMHO. From what I read online, it seems that very few of today's economists and commentators have any idea what Keynes really said. There is great enthusiasm for pontificating that "Keynes was completely wrong", "Keynes knew nothing about economics", "Keynes' ideas are completely out of date", etc. I find it hard to believe that any of those statements can be true.

I am not a mathematician, nor an economist. Only a software maker. That is why I wrote a book about macroeconomics by the same kind of inductive analysis I would use for any firm that makes products and sells them. I wondered where the big data are and how they flow inside the firm, from inputs to products, as a dynamic system. I had just finished the graphic system when, by chance, a little while ago, I read the French edition of J. M. Keynes, one of the books my wife used in her university days. Unhappily it is an old translation, not very easy to learn from. Anyway, I was amazed to find that this "Théorie générale de l'emploi, de l'intérêt et de la monnaie" was almost the same theory I had written. Mainly about the "demand and supply strategy", the psychological and dynamic issue that I found in my analysis of the main macro system: a choice made by people at every purchase. This is the main system that J. M. Keynes described. My idea is that purchases should be understood as vectors of values, multiplied of course millions of times at each moment by the people. These big flows of currency turn exactly as a cycle. That is why my book is not called a theory but only an explanation, "Money: How to flip the table off". Cycles of currency turning live, the way a cyclist keeps the wheels of his bicycle in an almost-equilibrium that sometimes falls, work the same way as a country's economy. Here is the French "Avant-propos" of my book.

Make sure you understand chapters 17 and 19 of the General Theory. Chapter 19 demonstrates that even if money wages and prices are perfectly flexible, involuntary unemployment can occur. Accordingly, Samuelson's Neoclassical Synthesis Keynesianism and New Keynesianism are not Keynes, because these latter two argue that it is the stickiness of wages and prices that causes unemployment.
Chapter 17, entitled The Essential Properties of Interest and Money, explains that the essential properties of all liquid assets are: [1] the elasticity of production is zero, so that when people put their savings into liquid assets they are spending that portion of their income on nonproducibles; and [2] the elasticity of substitution is zero, so that producible durable goods such as capital goods are not good substitutes for any or all liquid assets. Thus, as Hahn has demonstrated, when savings find "resting places" in nonproducibles there will be unemployment even if wages and all prices are perfectly flexible. This story is spelled out in great detail in my POST KEYNESIAN THEORY AND POLICY book.

Most people read GT without any appreciation of Keynes’ Treatise on Probability. This is a difficult read, but the main point seems to be that there is more to uncertainty than just conventional numeric probability. I have seen it claimed that Keynes was converted to mainstream Bayesianism before he wrote the GT, but I am unconvinced. There are thus two versions of the GT to consider: as if ‘radical’ uncertainty is or is not important. If not, I find it hard to see how the GT could be useful today. So I would like to see a modern interpretation of the GT as if uncertainty matters. Does this make sense?

David:
You should read my latest rejoinder to Rosser and O'Donnell et al., published in the latest issue of the JOURNAL OF POST KEYNESIAN ECONOMICS, vol. 39, issue 3, where I argue that Keynes's uncertainty involved fundamental uncertainty because of ontological reasoning that the future was not predetermined but could be created by decisions made today.
O’Donnell, on the other hand, argues that for Keynes uncertainty was an epistemological problem — because humans did not have the capacity to “Know” the future even if the future was already predetermined.

By the way, Keynes claimed the idea of equal probabilities leads to false statements. So much for Keynes becoming a Bayesian!

I admit I am either confused about, or skeptical that, one can distinguish between fundamental ontological uncertainty and epistemological uncertainty. It seems a debate analogous to the one between people who think they have free will and those who think people don't. I think these are basically opinions, or conventions, which can't be proven true or false. (The same issue seems to arise in quantum theory: some people think it is deterministic (e.g. Bohmian hidden-variable theory) while others think it is fundamentally stochastic.)

The phrase "the idea of equal probabilities leads to false statements" I don't understand, nor how it relates to Bayesianism, a philosophy based on rearrangements of the basic definition of conditional probability. In a sense, since no process can go back in time (at least on standard views of causality), no probabilities are equal: if you flip a coin, things have changed, so the probabilities for the next flip have changed. But in a different sense the probabilities are unchanged (at least empirically). So false statements don't seem to follow from them; you have the same statements.

(One can also recall the 'common cause' interpretation of quantum physics: the idea that even the experiments done, and the theories made, by physicists actually stem from the same common cause as whatever results they get. They can be viewed as just part of another experiment done by some grand 'intelligent designer'.)

Paul Davidson

November 22, 2016 at 6:06 pm

Bayesian theory in essence argues that everything can be reduced to equal probabilities in the absence of other evidence to the contrary. For example, to the question "Are there cows on Mars?" the answer is that either there are cows on Mars or there aren't.
Then we may ask a Bayesian: if there are cows on Mars, are they red? The answer is either yes or no, so the answers yes and no have equal probabilities, and the total probability from asking these two questions separately is 1/4.
But what is the probability if you ask a Bayesian the single question "Are there red cows on Mars?"
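Davidson's point can be made concrete with a small sketch (mine, not from the thread): under a naive principle of indifference every yes/no question receives probability 1/2, so chaining two questions gives a different answer from asking the compound question directly, which is the inconsistency Keynes criticized.

```python
# Sketch of the inconsistency described above. Under a naive principle of
# indifference, every yes/no question gets p = 1/2 regardless of content.

def indifference(_question: str) -> float:
    """Assign 1/2 to any yes/no question, ignoring its content."""
    return 0.5

# Chain two questions: "Are there cows on Mars?" then "Are they red?"
p_chained = indifference("cows on Mars?") * indifference("are they red?")

# Ask the compound question directly: "Are there red cows on Mars?"
p_direct = indifference("red cows on Mars?")

print(p_chained)  # 0.25
print(p_direct)   # 0.5, inconsistent with the chained answer
```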

This is a sort of 'exercise' (possibly wrong), but Bayesianism operates on the idea (which is just a revision of frequentism, i.e. a higher-order Markov process; see N. G. Van Kampen) that if I went around the universe and checked how many red cows there are, I'd see N red cows and M non-red cows, so the probability of red cows is P(R) = N/(N+M). Then I make a sample of planets: some X planets have cows and Y others don't, so P(cow) = X/(X+Y). Bayes says you 'update' this (your 'prior'): P(R, Mars) = P(R)P(cow). We don't know how many red cows are around or which planets have them; we just have a sample. But there are 4 choices, and the probabilities sum to 1.

Are there red cows on Mars? That's P(R)P(cows on planets).

My experience is that most cows are red and almost all planets have cows, so P(R)P(cows on planets) = 0.99999...×0.999999... = 7 (I can multiply good).
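The frequency arithmetic in the comment above can be written out; all the counts here are invented for illustration, and the final line shows why the joking "= 7" cannot happen: a product of probabilities never exceeds 1.

```python
# Hypothetical counts (the N, M, X, Y of the comment above, made up here).
n_red, n_not_red = 98, 2                  # red vs non-red cows observed
planets_with, planets_without = 99, 1     # planets with vs without cows

p_red = n_red / (n_red + n_not_red)                       # P(R)   = N/(N+M)
p_cow = planets_with / (planets_with + planets_without)   # P(cow) = X/(X+Y)

# Multiplying the two assumes independence -- an unstated premise above.
p_red_cow_on_planet = p_red * p_cow
print(round(p_red_cow_on_planet, 4))  # 0.9702, safely below 1
```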

Paul, I have had a quick scan through your rejoinder. Please forgive me if I have misinterpreted anything, but as a mathematician I don’t think any amount of careful study will ever enable me to totally understand economics-ese.

My main point is that you don’t refer to Keynes’ Treatise. He does refer to it in a footnote to his GT, so I think it reasonable to use it to help interpret his economics. This would very easily dispose of many of your critics’ points.

You also don't refer to 'Black Swans'. Technically, Taleb's notion of a Black Swan seems a lot like O'Donnell's notion of an ergodic process that doesn't give rise to trustworthy statistics. I agree with you that this is not what Keynes was talking about, and that 2008 wasn't that kind of Black Swan.

In the Treatise Keynes was concerned to develop a theory of knowledge that would count knowledge of eclipses as ‘proper knowledge’ while highlighting the flaws in classical econometric ‘knowledge’. Thus the issues that you are discussing go well beyond economics.

Finally, I am not familiar with your Yaglov and Wold sources, but Turing (who cites Keynes's Treatise) showed how dynamic stochastic processes have 'critical instabilities'. It is as if we are watching a possibly biased coin being tossed and using statistics to forecast the frequency of heads. According to Keynes (unlike you) this is reasonable in the short term, but we can't expect our forecast to hold far into the future. What we need is to be able to determine the forecasting horizon: e.g., how soon could someone change the coin without our noticing? Whitehead, prior to the GT, also writes on this, although even more obscurely.

Thus where I disagree with you is that the mainstream approach to statistics is valuable and should continue to be used: the need is to understand the limitations of the mainstream and to develop complementary methods for anticipating and surviving 'critical instabilities'. Does this make sense?
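The changing-coin picture above can be simulated; the regime lengths and biases below are my own arbitrary choices, not from the comment. A frequency estimate learned before the unannounced change becomes a poor forecast after it, which is the point about needing a forecasting horizon.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# A coin tossed 500 times with bias 0.5, then quietly swapped for one with
# bias 0.9: a toy version of the 'critical instability' discussed above.
flips = [random.random() < 0.5 for _ in range(500)] + \
        [random.random() < 0.9 for _ in range(500)]

estimate_at_500 = sum(flips[:500]) / 500   # frequency learned pre-change
final_frequency = sum(flips[500:]) / 500   # what actually happens after

print(round(estimate_at_500, 2), round(final_frequency, 2))
```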

Paul, there are many varieties of Bayesianism. Your cows example assumes a variant that takes an extreme form of what Keynes called 'the principle of indifference'. Keynes was not so extreme. I note that Keynes developed a 'constructive theory' which Russell supported and which Good regarded as the source of 'modern Bayesianism'. As a mathematician, it seems to me that there are mathematically valid versions of Bayesianism practiced by good mathematicians. The problem is to understand when such methods are inadequate and to develop and adopt better ones, such as those of Keynes and Good. The main thing is to realise that in so far as it is valid, Bayesianism is mathematics, not dogma. In so far as it is a dogma it is neither mathematics nor valid.

My criticism of mathematicians is that they have failed to explain the difference between mathematical and dogmatic Bayesianism. I have a blog at djmarsay.wordpress.com that has ample material, but I do not yet have anything that seems effective at explaining this issue. What you have is as good as it gets but I find the cows distracting.

Bayes' rule is P(A/B) = P(B/A)P(A)/P(B). Since P(A/B) = P(A,B)/P(B), this reduces to P(A,B) = P(B,A).

That is false for time-dependent processes in general.

William Bialek pointed this out too in a paper on pattern recognition via neural networks; it's a trivial theorem, an identity from elementary probability. (If you have P(A/B, t) then it gets more complex: then P(a,b) ≠ P(b,a), so it's non-commutative, like quantum theory.) Bayesians leave this assumption unstated, but they incorporate it via their updating rule.
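The identity in the comment above is easy to check numerically for a fixed, time-independent joint distribution; the joint table below is an arbitrary toy example of mine.

```python
# Numerical check: for a fixed joint distribution, Bayes' rule
# P(A|B) = P(B|A)P(A)/P(B) follows from the symmetry P(A,B) = P(B,A).
# (For a time-dependent process, the joint masses at different instants
# need not agree, which is the caveat raised in the comment above.)
joint = {(0, 0): 0.1, (0, 1): 0.2, (1, 0): 0.3, (1, 1): 0.4}

p_a = sum(p for (a, b), p in joint.items() if a == 1)   # marginal P(A=1)
p_b = sum(p for (a, b), p in joint.items() if b == 1)   # marginal P(B=1)

p_a_given_b = joint[(1, 1)] / p_b   # P(A=1 | B=1)
p_b_given_a = joint[(1, 1)] / p_a   # P(B=1 | A=1)

# Both sides of Bayes' rule reduce to the same joint mass P(A=1, B=1).
assert abs(p_a_given_b - p_b_given_a * p_a / p_b) < 1e-12
print(round(p_a_given_b, 4))  # 0.6667
```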

Gelman (Columbia, same place as J. Sachs and other neoliberals; https://andrewgelman.com; I tend to agree with him but not with the Bayes label) and Cosma Shalizi (CMU) call themselves Bayesians, and some or all consider Jaynes a Bayesian (https://bayes.wustl.edu). I think it's a tempest in a teapot.

While not economics, I think this sort of clarifies the issues, except that it's written at a very technical level: https://arxiv.org/abs/cond-mat/0409511. I was thinking of writing a simpler version applied to economics, but maybe I'll leave that chore to others. Why bother?

https://arxiv.org/abs/cond-mat/0409511 (it would be interesting to see how this has held up; it has 62 citations, but I can't read all of them). It goes through the entire non/ergodicity, non/stationarity question via the Vlasov equation, which I view as a variant of the Fokker-Planck/Boltzmann equations (the Fokker-Planck equation is of course equivalent to the Black-Scholes equation via a change of variables), but you have to make the diffusion and drift coefficients nonlinear. Nobody can solve that, so you bring in renormalization group theory (Kadanoff-Wilson). Some say you can now do it with pencil and paper too: turn a million-page equation into a single diagram on one page. I'm not sure what this has to do with economics, except that it's efficient.

First point of empirical evidence which should bring about theory is as follows:
Money contracts are used to organize ALL market transactions between buyers and sellers. Why? See my POST KEYNESIAN THEORY AND POLICY book!

Second: Money is that thing that the State designates will settle all legal contractual obligations. See Keynes's TREATISE ON MONEY, Chapter 1, and how I develop this in my POST KEYNESIAN THEORY AND POLICY book to define money not as a store of purchasing power but instead as a STORE OF SETTLING ALL CONTRACTUAL OBLIGATIONS!

It is because of this contractual use of money in all market transactions that Keynes’s liquidity theory becomes essential and Chapter 17 entitled THE ESSENTIAL PROPERTIES OF INTEREST AND MONEY becomes the key to understanding not only Keynes’s General Theory — but more importantly understanding the operation of a money using, market oriented economy where the future is uncertain — and fundamental uncertainty is tamed by the use of forward money contracts to achieve a degree of legal certainty regarding future cash in flows and cash outflows!!

Paul, when you raise expectations of an answer here to your own question, it is not appropriate to try to sell us your book; nor do your interpretations of the facts (very illuminating in respect of Keynes's focus on liquidity) answer the question "Why?". As a mathematician I would answer it in terms of money providing a variable where goods provide a value; the fact that States don't want to have to define, in advance of knowing, what they will need to buy is incidental; none of us do.

So back to Dave Marsay’s question and the subsequent Bayesian debate. “There are thus two versions of the GT to consider: as if ‘radical’ uncertainty is or is not important. If not, I find it hard to see how the GT could be useful today. So I would like to see a modern interpretation of the GT as if uncertainty matters. Does this make sense?”

Very much so. My modern interpretation of the GT (which I have held since reading the GT in 1973 as a follow-up to reading the TP, the Treatise on Probability) is that it foreshadowed post-war developments in control theory, distinguishing information-feedback systems from a mechanical control system like a steam-engine governor or safety valve, which would go up or down a bit depending on foreseeable uncertainties in the load, but in any case required a bigger error to effect the necessary adjustments.

The name given to the new theory was 'cybernetics', meaning steering; but reflecting on this application it became evident that directing a ship to port by correcting observed deviation from the compass setting due to wind and wave impacts was not sufficient: side winds, tides and local currents could take you away from the intended course, so the course needed to be periodically reset. That is the equivalent of Keynes saying [in the context of wars and market saturation] that we need to correct for being off course from the intended equilibrium, where relying only on our price compass would keep us off it. (Industrialists, incidentally, happily change course onto new product lines when apparently faced with market saturation, even where demand still exists but not the wages with which to buy it.)

Keynes's solution added correction of employment errors to correction of price and market-structure errors, using tax to redistribute incomes to make prices affordable. In this modern version of the economy being steered by an "invisible hand", then, I am saying that steering which relies on the predictably self-cancelling random errors of price-compass readings is not enough; one needs to correct errors (after they have happened and become knowable) due to not entirely predictable (even if systematic) accumulating side effects on the market's potential customers.

This overflows into the bosh of the current Bayesian argument, which neoclassically trained mathematicians see as correcting estimates in light of additional information, so that if a wrong method is used to collect the information it will affect the additional information too. Bayes is not about number theory but about the possibility of gambling successfully, where the hypothesis that a die is in fact symmetrical may need to be corrected in light of throwing it. But this is an ongoing game. One may form a new hypothesis by discovering an asymmetry or by splitting the difference, estimating the bias as the mean between the previously predicted and the experienced distribution, and repeating this until there is negligible difference. In quality-control circles this manifests as the probability of a sample being drawn from an acceptable batch being qualified by a confidence factor in the sampling method, given (canonically) the size of the sample. As I remembered it, Keynes didn't actually have this two-dimensional understanding of probability in the TP, which amounts to checking the method of reaching a conclusion by using a different method. However, looking at the following quotes from Wikipedia, perhaps I was wrong.

Keynes’s conception of probability is that it is a strictly logical relation between evidence and hypothesis, a degree of partial implication. Keynes’s Treatise is the classic account of the logical interpretation of probability (or probabilistic logic).
[https://en.wikipedia.org/wiki/A_Treatise_on_Probability]

Thomas Bayes attempted to provide a logic that could handle varying degrees of confidence; as such, Bayesian probability is an attempt to recast the representation of probabilistic statements as an expression of the degree of confidence by which the beliefs they express are held. [https://en.wikipedia.org/wiki/Probability_interpretations]
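The die-correction procedure described earlier (repeatedly revising a symmetry hypothesis in light of throws) is, in modern terms, conjugate Bayesian updating. Here is a minimal sketch of mine using a Beta prior on a coin's bias; the observed counts are hypothetical.

```python
# A minimal sketch (not from the thread) of "correcting the hypothesis in
# light of throwing it": a Beta-Binomial update of a coin's suspected bias.
# Start from the symmetric hypothesis Beta(1, 1) and update on observations.
alpha, beta = 1.0, 1.0          # prior: coin provisionally assumed symmetric
heads, tails = 70, 30           # hypothetical observed throws

alpha += heads                  # conjugate update: add successes ...
beta += tails                   # ... and failures to the Beta parameters

posterior_mean = alpha / (alpha + beta)   # updated estimate of P(heads)
print(round(posterior_mean, 3))  # 0.696
```

Repeating the update with further throws narrows the posterior until, as the comment puts it, there is negligible difference between predicted and experienced distributions.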

Keynes advised Hill on the control theory aspects of automatic gun-laying in WW2. So your link between his prior work and control theory is not at all far-fetched. My own experience is that many of those who work on control theory have established some adequate fudges which save them from recognizing the deficiencies of a narrowly Bayesian approach. It may be that economics does not need to abandon narrow Bayesianism but to develop some good fudges, to replace the bad fudges such as equilibrium.

Reading TP before GT (as I did) does affect your understanding of both.

Paul Davidson is right: "ALL market transactions between buyers and sellers" should be organized as well as possible. But this harmonious agreement is possible only if there is confidence between them, and that sometimes occurs only by chance.

Before criticizing Keynes (as so many feel free to do, half a century after the man’s death), or generalizing about how “Keynes understood nothing about economics”, it would be wise to consider the following opinion. (And remember that Bertrand Russell was himself one of the world’s most intelligent men).

“Keynes’s intellect was the sharpest and clearest that I have ever known. When I argued with him, I felt that I took my life in my hands, and I seldom emerged without feeling something of a fool. I was sometimes inclined to feel that so much cleverness must be incompatible with depth, but I do not think that this feeling was justified”.

Regarding going back to Keynes's TREATISE ON PROBABILITY to claim that these TP statements express Keynes's thinking when he was writing the GENERAL THEORY: this implies that Keynes never developed any further ideas about probability after writing the TP essay as an undergraduate at Cambridge. Yet his 1937 QJE article on the general theory expresses something about probability that is not in the TP.

Also see Keynes's article on Tinbergen's method, where he questions the homogeneity of economic time series. For, as we know, homogeneity is the equivalent of stationarity, and stationarity is a necessary, but not sufficient, condition for ergodicity. Thus nonhomogeneity implies nonstationarity, and that is a sufficient condition for nonergodicity.

Thus Keynes, in questioning Tinbergen's presumption of homogeneity, was indicating that he believed economic time series were not homogeneous, and therefore not ergodic! And nonergodic systems do not permit the probability calculus to provide actuarially certain forecasts of the future, as Keynes specified in his 1937 QJE comments on how using the probability calculus produces errors in forecasting the future.
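Davidson's nonergodicity claim can be illustrated with a sketch of mine (not his): for a nonstationary series such as a simple random walk, the time average computed from one realization does not converge to any population mean, so independent realizations give quite different averages however long we run them.

```python
import random

random.seed(1)  # fixed seed for reproducibility of this sketch

def time_average_of_walk(steps: int) -> float:
    """Time average of a simple random walk's position over one realization."""
    x, total = 0.0, 0.0
    for _ in range(steps):
        x += random.choice([-1.0, 1.0])  # unit step up or down
        total += x
    return total / steps

# For an ergodic process these two numbers would agree for large samples;
# for the random walk they typically stay far apart.
avg1 = time_average_of_walk(10_000)
avg2 = time_average_of_walk(10_000)
print(avg1, avg2)
```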

This use of statements from Keynes's TREATISE ON PROBABILITY as Keynes's last word on probability analysis and uncertainty would be equivalent to taking the lecture notes on economics that Keynes presented at Cambridge in the 1920s and claiming that whatever Keynes said in the 1920s was the economic theory he believed in 1936 and beyond. But we know that Keynes changed his mind about the correct economic theory of the operation of a money-using, market-oriented entrepreneurial system between the 1920s and 1936, and that his 1936 statement of the proper economic theory was very different from what he professed to students at Cambridge in the 1920s.

I shall put your article on my reading list, but I am off on holiday soon.

My understanding of the TP is that it was revised after his experience at Versailles and published alongside his ‘Economic Consequences’, in a way which ought to have met your concerns but which seems to me incomplete. But I do think that technically the TP read alongside EC covers much of the ground of the 1937 article. If you could refer me specifically to any new mathematics, I would be grateful.

The new thing in 1937 that I see is '[I]t is usual in a complex system to regard as the causa causans that factor which is most prone to sudden and wide fluctuation.' This is not mathematical. It is psychological, with an impact on descriptive economics, but I think it quite wrong. In 2007/8 were people really paying attention to the factors that were prone to cause the crisis? More likely, they were paying attention to those things that they thought were prone ... The TP is full of similar mistakes, but the actual mathematics still seems to reflect Keynes's mature position and to be correct (apart from some relatively minor technical details).

For example, your comments on ergodicity and homogeneity seem to me well covered by the TP. The early part is all about the impossibility of general calculus.

I do not understand your claimed equivalence between a theory of probability and a theory of money. The first is timeless and universal (if correct). The second is necessarily contingent on things that change with time and place. (Keynes worked before globalisation.) Adding to a mathematical theory is a good thing, but substantive changes simply show that the old theory was wrong. Is the mathematics in the TP wrong? How has it been added to?

Paul Davidson

November 27, 2016 at 6:33 pm

Connection between probability theory and the theory of money: in an economic system where decision makers know they cannot know the future outcome in real terms, the use of money contracts [spot and forward] permits decision makers to have a degree of legal certainty regarding cash inflows and cash outflows.
