Scientism in the Way of Science

November 30, 2010

by Gene Callahan

I repeatedly find attacks on positions in the social sciences made based on extremely limited and, frankly, antiquated views of how the physical sciences proceed. I will give one example from a rightist criticism of a leftist view, and one from a leftist criticism of a rightist view, to illustrate that my point has nothing to do with ideology — or perhaps, that it has to do with the way ideology can lead one to embrace flimsy criticisms of others’ positions.

The first excerpt is from Hunter Lewis’s book, Where Keynes Went Wrong:

“In chapter 15, we saw how Keynes wrote N = F(D), which means that employment, denoted N, is a function of demand. Demand however is defined as expected sales, not actual sales. We noted that expectations are not a measurable quantity and thus do not belong in an equation.”

Well, one way to measure these expectations would be to walk around and ask the entrepreneurs “How much do you expect to sell this year?” then total up those amounts. Why in the world this would not be a fine measurable quantity is unclear.
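To make the point concrete, here is a toy sketch of that procedure; the firm names and figures are invented purely for illustration:

```python
# Hypothetical survey: ask each entrepreneur "How much do you expect
# to sell this year?" and record the answers (all figures invented).
survey_responses = {
    "Firm A": 120_000,
    "Firm B": 340_000,
    "Firm C": 95_000,
}

# Aggregate expected sales is simply the total of the responses --
# a perfectly ordinary measurable quantity.
total_expected_sales = sum(survey_responses.values())
print(total_expected_sales)  # → 555000
```

However crude such a survey would be in practice, nothing about the resulting number disqualifies it from appearing in an equation.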

But perhaps even worse is Lewis’s contention that only a “measurable quantity” belongs in a mathematical equation. So, let us strike pi from all of our equations, and e, and, most certainly, i! All complex numbers must be banished, and negative numbers are fairly suspect as well.

Furthermore, most of the entities dealt with by modern physics are not directly measurable. Instead, what we measure is a dial reading or a trail on a photographic plate, things which require a great deal of theory to connect them to entities like electrical fields or positrons. As the philosopher Susanne Langer wrote:

The sense-data on which the propositions of modern science rest are, for the most part, little photographic spots and blurs, or inky curved lines on paper. These data are empirical enough, but of course they are not themselves the phenomena in question; the actual phenomena stand behind them as their supposed causes… we see only the fluctuations of a tiny arrow, the trailing path of a stylus, or the appearance of a speck of light, and calculate to the “facts” of our science. What is directly observable is only a sign of the “physical fact”; it requires interpretation to yield scientific propositions… and [realizing this,] all at once, the edifice of human knowledge stands before us, not as a vast collection of sense reports, but as a structure of facts that are symbols and laws that are their meanings.

(Philosophy in a New Key)

And surely this was what Keynes thought: aggregate demand may not be directly observable, but we can formulate laws by which its effects are observable, for instance, in a recession. Now, whether he was correct or not is not my topic, but there is certainly nothing unscientific about his hypothesis.

The second excerpt is from a history of marginalism at The New School for Social Research:

“However, [marginalism’s] Achilles’ heel was the very notion of ‘marginal utility’. Marginal utility, let us be frank, is hardly a scientific concept: unobservable, unmeasurable and untestable, marginal utility is a notion with very dubious scientific standing.”

Unobservable, unmeasurable and untestable — like, say, infinitesimals in calculus! (And people like Berkeley directed just such criticism at infinitesimals and other mathematical notions.) Once again, we have some unfounded belief that scientific entities must be directly observable, rather than observed by their hypothesized effects. (And certainly the theory of marginal utility predicts many observable phenomena, such as the lack of a price for air in normal circumstances.)

40 Responses to “Scientism in the Way of Science”

This is a great post.
What about the issue of methodology?
Ludwig Lachmann (in his fascinating article “John Maynard Keynes: A View from an Austrian Window,” South African Journal of Economics 51 (1983): 253–260) argued that

“In the field of methodology Keynes and the Austrians agree that economics is a social science to which methods that have proved successful in the natural sciences should not be applied without careful inspection, and that, in particular, all attempts to ‘give numerical values’ to the parameters of economic models ignore the essential meaning of economic theory. It is hardly surprising that even here we find differences of accent and perspective, but … they do not amount to much …. Keynes concurs with Hayek’s misgivings about numerical values.” (p. 256)

Both Hayek and Keynes were sceptical about econometrics too.
See also John Pheby:

One important point is that the Popperian revolution in the methodology of the natural sciences has given us Popper’s critical rationalism, with its emphasis on falsification.
The old logical positivist, classical empiricist methodology for the natural sciences (with its problem of induction) is rejected by many working scientists today. Some argue that the methodological differences between the social sciences and the natural sciences have actually narrowed quite a bit.

A very interesting read indeed, for me, to see a social scientist’s interpretation of the proper limits of science (physical & social) according to one perspective on what qualifies as “scientism.” Since I usually agree with most blog posts here, I won’t shy away from stating a few objections.

Regarding Hunter Lewis’s critique of Keynes’s use of immeasurable quantities in mathematical tools: I don’t believe the comparison between the immeasurability of expected sales and the immeasurability of irrational/transcendental numbers such as pi or e is a good example for the case you are trying to make. In applying mathematics to our attempts at understanding the “real world,” context is everything, and immeasurability is not the only objection that could be made against the use of expected sales in mathematical formulae. It’s the type of immeasurability that matters. Geometry is not a social science, and vice versa. Our minds can refine an abstract concept such as a circle, or the hypotenuse of a right triangle whose adjacent sides both equal one, with great precision. Our ability to assess expected sales is another matter: the value may be “measurable,” but of what use is it if, firstly, it is not directly proportional to demand; secondly, even if it were, the factors of proportionality are unknown; and thirdly, the method of obtaining it may rest on some statistical survey whose error affects the model’s accuracy? I could go on: how reliable is the data gathered? With the irrational numbers, we are talking about simple systems, many of whose “truths” are directly accessible to our senses and intuitions, and logic has been shown to support the basic relationships that hold between such simple mental constructs. I doubt whether logic can do the same for the supposed relations that exist between the objects of economists’ interest.

Regarding negative numbers: they can be measured in the real world if we take their sign to mean direction.

Similarly for complex numbers: they can be thought of as vectors in the plane (among many other useful interpretations), which makes them useful in navigation, stress analysis, and so on.

Natural & rational numbers may be “measurable,” but they too are nothing more than the result of our intuitions’ attempt at abstracting the sense-data picked up as we find our way through the natural world. The relations that exist between them, though, are true independently of us, as Gödel has shown, just as the Universe doesn’t need us in order to exist.

Regarding your statement: “Furthermore, most of the entities dealt with by modern physics are not directly measurable. Instead, what we measure is a dial reading or a trail on a photographic plate, things which require a great deal of theory to connect them to entities like electrical fields or positrons.”

– Not being directly measurable does not mean the same as immeasurable. In fact, as long as the “dial readings” or tools of physicists are calibrated correctly they in effect act as our eyes, ears, touch, etc. In fact data gathered in this way is equivalent to there being a super human capable of seeing the smallest/greatest objects, hearing the faintest/loudest sounds etc. as long as the devices are properly calibrated. This in no way disqualifies the quality of the data being gathered or renders it less “real”/useful. The measured results don’t need theory to connect them to “entities like electrical fields or positrons”; electrical fields and positrons ARE the theory! The measured results either accord with a theory or help to refute it, but they don’t need a theory to connect them to more “theory.”

“And surely this was what Keynes thought: aggregate demand may not be directly observable, but we can formulate laws by which its effects are observable”

– He was wrong, but you’re not all that far off. True, objects to be studied need not be directly observable for us to formulate laws (e.g. quantum mechanics), but their effects must be measurable (even if not directly observable) in order to give said laws any practical value. Laws whose results do not closely correspond to the empirical evidence they attempt to describe are useless.

With respect to your final paragraph: I completely agree that the concept of marginal utility (like the concepts of infinitesimal calculus, though not for the same reasons) is a greatly useful tool/abstraction that indeed helps us to make sense of the “real” world. Berkeley was wrong. In a sense, neither infinitesimal calculus nor marginal utility can be “directly observed,” but our ability to conceptualize these useful “objects” is a direct result of our species’ long attempt at making sense of the world in which we live, using nothing more than our senses and our reason as guides in turning many observations into great (and useful) inspirations.

I agree that scientism is a great danger to the much-needed progress in the social sciences. A real understanding by social scientists of pure mathematics, of mathematical applications in the physical sciences, of the methods of the physical sciences, and of error analysis can go a long way toward clarifying the limits and applicability of these methodologies to the more complex systems of the social sciences.

Read Jacob Bronowski’s “The Origins of Knowledge and the Imagination” to get a great epistemological bridge between these two worlds, one I’m sure Hayek would approve of.

If I may add to this discussion on Keynes’ use of the concept of “aggregate demand.”

I would suggest that part of the “Austrian” criticism of Keynes’ aggregates is that there is nothing called “demand for output as a whole.” There is no such good or commodity called “output” and there is no demander in the market who demands the “whole” of it.

There are, in reality, only the demand for individual and specific goods, demanded by individual and specific individuals.

Now one rebuttal at this point might be that this means one cannot or should not “aggregate” even individuals’ “individual” demand curves for individual goods. And that might throw out even much of microeconomic analysis as both Austrian and mainstream economists view microeconomics.

But if we ask what defines “demand” or “supply” it is usually considered to be all units of a “good” that from the perspective of agents in the market are viewed as homogeneous and interchangeable for a purpose in mind.

(Thus, we might not put under the same market demand or supply curve two quantities of some commodity on “today’s” market in a specific location if they are separated by space and time. And this, regardless of their possible similar physical qualities or characteristics.)

Thus, we can talk about the respective “market demand” for Pepsi or Coke (assuming demanders view these as “different” goods, say, due to taste, degree of carbonation, etc.). Or we can speak about the “market demand” for “cola” drinks, incorporating both Pepsi and Coke, if from the demanders’ perspective they are viewed as interchangeable and homogeneous units that serve to satisfy the desire for a cold, carbonated drink.

But this would not apply to “output as a whole.” Yes, you can add up the money expenditure for “everything” at alternative average “price levels” for “output as a whole.”

But these are statistical creations of the analyst that have no relevance or causal counterpart in the actual decisions and actions of actors in the market.

There is no such thing as “the price” (the price level) for output as a whole. There are only the individual prices for specific goods, that, ex post, the economic statistician adds and averages (by one formula or another). But it does not “exist,” and does not impact on people’s decisions and actions in the market.

(Yes, a unit of money does have a value, or purchasing power, in the market. But this is represented by the array, or set, or network, of the structure of individual ratios of exchange between money and each of the individual goods against which money trades in the market. And, the scale of this structure can, in general, be “higher” or “lower,” as well as experience changes in relative relationships among money and individual goods.)
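To make the statistician’s averaging concrete, here is a toy sketch of one such formula, a Laspeyres-style index; the goods, prices, and quantities are all invented for illustration:

```python
# Toy price-level construction (all goods, prices, and quantities invented).
# A Laspeyres-style index weights each good's current price by a fixed
# base-period quantity; other formulas (Paasche, Fisher) weight differently.

base = {"bread": (2.00, 100), "fuel": (3.00, 50), "rent": (800.00, 1)}  # (price, qty)
now_prices = {"bread": 2.20, "fuel": 2.70, "rent": 880.00}

base_cost = sum(price * qty for price, qty in base.values())
now_cost = sum(now_prices[good] * qty for good, (_, qty) in base.items())
index = 100 * now_cost / base_cost  # base period = 100

print(round(index, 1))  # → 107.4
```

Note that “the price level rose 7.4%” is the analyst’s ex post construct: beneath it, bread and rent rose 10% while fuel fell 10%, and no actor in the market ever faced the 107.4 itself — which is exactly the comment’s point.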

Furthermore, attempting to aggregate the demand and supply of “output as a whole,” also does away with something else that is normally considered essential to market analysis. By summing up “everything” under one demand curve, it submerges, hides, and eliminates all relationships of substitutability and complementarity among goods. It weakens, if not abolishes, a focus and attention to trade-offs and opportunity costs normally considered essential in the analysis of market events.

Thus, one eliminates all the microeconomic relations and patterns, and interconnections that would enable an “alternative” analysis and interpretation of how and why there can come into existence wide and significant declines in many outputs and many employments in many inter-related markets across the economic system.

Does Hayek not aggregate in “Prices and Production”? Yes, but notice: He never aggregates away the relative price and production-structural relationships and inter-connections that permits (given his interpretive schema) a form of “microeconomic” analysis and explanation of what has come to be called “macroeconomic” outcomes.

Thus, in any such analysis, it seems to me, there must be maintained a degree of “methodological sufficiency,” for the use of some term. That is, the analysis must be able to tell the “story” that the analyst considers meaningful and relevant to explain and interpret what he is studying, and be consistent with certain essential conceptual requirements without losing its moorings in micro-foundations.

To most Austrians, the standard Keynesian conceptual constructions are untied from a variety of these essential and elemental micro-foundations, and they therefore eliminate the possibility of a more fruitful account of economy-wide fluctuations in those many outputs and many employments.

I generally agree with Gene. However, there is a distinction between social and natural sciences. Social sciences, unlike natural science, study human behavior and action.

Thus, in the natural sciences, findings and models remain stable or valid for long periods, and so can be treated as theory.

In the social sciences, however, findings are often very time and place sensitive. Thus analysis of history or certain events is fine, but forecasting or using these findings in different settings is problematic due to changes in institutional structures, human preferences, beliefs and interactions. One can of course assume that everything remains stable, to forecast, but this assumption is very strong as forecast errors show over and over again.

Therefore inductive findings from, e.g., time-series analysis or experiments should, in my opinion, not be used as a way to form economic theory, but only to judge historical events or to disprove a theory ex post against the data. But that is possible, and I do not see why it should not be done.

Obviously (I think), we can learn from history and institutional analysis.

“commonsense” wrote: “In fact, as long as the “dial readings” or tools of physicists are calibrated correctly they in effect act as our eyes, ears, touch, etc. In fact data gathered in this way is equivalent to there being a super human capable of seeing the smallest/greatest objects, hearing the faintest/loudest sounds etc. as long as the devices are properly calibrated.”

Wow, this is the most moronic form of naive empiricism I have ever encountered!

I pretty much agree with Gene, though I agree with Andreas Hoffman and Richard Ebeling too.

One thing I’ve noticed about these discussions is that it isn’t very useful to argue about whether things are a “law” or not. Natural scientists are not in agreement about what’s a law in that area either.

Richard Ebeling says there’s no such thing as “demand for output as a whole.” Yet firms, especially large ones, pay close attention to projected GDP, total consumption, business investment, etc.

In fact, many of Keynes’s aggregates now belong to the conceptual apparatus — the subjective orientation — of countless decision makers. Should we banish these aggregates from our thinking because they don’t “really” exist, or should we accommodate them because they belong to the outlook of a great many market participants?

You know, I should not have been so harsh in my previous comment, but, really, what in the world could it even mean to “see” a neutrino, given that it “is an elementary particle that usually travels close to the speed of light, is electrically neutral, and is able to pass through ordinary matter almost undisturbed.” There is not any possibility of us “seeing” such an entity, however much we might enhance our sensory apparatus. No, we measure macro-level phenomena, and extrapolate to the neutrino through a welter of theory.

And, Richard Ebeling, I agree, that fundamental difference exists, but that is orthogonal to my point, which is that these critics have not even comprehended the physical sciences properly, quite aside from the difference you point out.

“Economists should abandon aggregates because they mask reality from our view. They are a senseless source of naive notions and endless errors.” There’s no argument here, just assertions.

Next, you add, “What do you think biology would look like if we studied ‘aggregate life’? What on earth would that nonsense even mean?” Here, at least, we have the glimmer of an argument, but, unfortunately, it misses the distinctive features of human life.

What’s the difference between an ant colony and a city? Well, for one thing, the residents of a city think about their neighbors, fellow citizens, etc., with the help of concepts, like “neighbor” and “citizen,” which orient their interaction.

In contemporary economies, many decision makers (and not just Keynesian economists) think and talk in terms of aggregate demand, etc. I thought Austrians were interested in the subjective orientation of agents rather than in the mechanics of ant farms, but perhaps when these orientations no longer fit your theory, you feel the need to banish them as “nonsense.”

And if you think aggregates “are a senseless source of naive notions and endless errors,” then why don’t the firms whose managers think without aggregative concepts (and so avoid those “endless errors”) drive the rest of the firms, which consult GDP forecasts, etc., out of business, what, with the survival of the fittest and all that?

No firm, no matter how large or small, makes its actual business and investment decisions on the basis of “aggregate demand” or “total output,” or “national income.”

At the most, these are indications that some might try to read as “the general business climate,” if that is considered relevant to their decision-making.

In fact, what any firm is guided by are expectations of future demand for their specific product or service, relative to the specific factor (and related) prices upon which they base their “cost of production” estimates.

It is always relative demand and relative price relationships that determine any actual firm’s likely profitability.

Even the expected “general rate of inflation” will tell very little about any firm’s possible profit prospects. What matters is whether, in a market climate in which many prices may be rising, they anticipate that their selling price will be going up faster or slower than their expected factor-price costs.

That prices on average are rising, say, 3 percent, and cost-prices (wages, etc.) are rising on average, say, by 2 percent, tells nothing about the specific price for their particular product, or the specific factor cost-prices relevant for their investment and production decision-making.

And, yes, Gene, I apologize that I have been one of the “guilty parties” for getting the thrust of your original argument off-track.

You claim that “no firm, no matter how large or small, makes its actual business and investment decisions on the basis of ‘aggregate demand’ or ‘total output,’ or ‘national income.'”

To cite but one falsifying example, large electric utilities in the Pacific Northwest include projected U.S. GDP, or something like it, as an input to their energy demand forecasting models, which, in turn, inform their investment decision making. If you really look, I think you’ll find many firms that use forecasting models which incorporate aggregate demand, in addition to other aggregates, as inputs to their sales forecasting models.
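As a toy sketch of the kind of model described (the elasticities, function name, and figures below are invented for illustration, not taken from any real utility):

```python
# Hypothetical demand-forecasting model that takes projected GDP growth
# as one input alongside other drivers (all coefficients invented).
def forecast_energy_demand(gdp_growth_pct, pop_growth_pct, base_demand_gwh):
    # Invented elasticities; a real utility would estimate these
    # econometrically from its own service-area data.
    gdp_elasticity = 0.6
    pop_elasticity = 0.9
    growth = (gdp_elasticity * gdp_growth_pct
              + pop_elasticity * pop_growth_pct) / 100
    return base_demand_gwh * (1 + growth)

# E.g. 2.5% projected GDP growth, 1% population growth, 10,000 GWh base load:
print(forecast_energy_demand(2.5, 1.0, 10_000))
```

In such a model the aggregate enters as one input among firm-specific ones, which is compatible with both sides of this thread: the firm acts on expected demand for its own product, but an aggregate forecast informs that expectation.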

And the fact that wages in many labor contracts are tied to the CPI would seem to falsify your claim that “average” prices “tell us nothing about . . . the specific factor cost-prices relevant for [firms’] investment and production decision-making.”

Your comparison of an ant colony to a human city has no bearing on what I was saying.

Aggregates have nothing to do with subjective human decisions. Aggregates are a mistaken attempt at “objectivity.” Which suggests that you don’t understand either subjectivity or aggregates.

Arguing that companies use some bit of nonsense promulgated by a majority of economists, whom they mistakenly believe to know something about the economy, proves nothing. If everyone is using the same nonsense, then everyone is making the same errors. This has happened throughout the history of science, and economics is no better here.

If Walmart had paid attention to aggregates rather than to their own data, they would have laid people off during the recession — and quite mistakenly, too.

Gene: “I repeatedly find attacks on positions in the social sciences made based on extremely limited and, frankly, antiquated views of how the physical sciences proceed.”

And I’d add that since these views themselves continue to evolve, one searches in vain for methodological consensus in philosophy of the natural sciences from which to attack one’s opponents in economics, even assuming that a correct methodology of the natural sciences could be expected to carry over. Indeed, one might better seek in economics a philosophy of the natural sciences! Which brings me to an occasional shameless, mildly off-thread indulgence in self-promotion of an obscure paper that I think deserves to be a bit better known: Science As a Market Process. Google along with my name. Comments welcome at awalstad@pitt.edu

Well, I think the argument against the use of models is different from the one you’re stating. There are three or four major lines of argument in your post, all making assumptions about ‘science’ and the scientific method.

Marginal utility is an expression of the relativity and subjectivity of value, of the plasticity of utility, and of the dynamic variability of value in real time. This creates a set of variables that lead to the effective uniqueness of each object in time for many (if not all) objects, which in turn leads to the categorical error of aggregation when applied to quantities, each of which includes necessary errors due to aggregation. This error of aggregation is the reason for non-prediction, and therefore non-prediction is caused by the very reasons the Austrians stated. That in the aggregate much of this can be modeled is true, at least for many commodities.

Objects in physical space have a prior course. So do human events. We can measure the delta in the course of physical events, but we CANNOT measure the delta in the course of social events. That is the simplest statement of the problem: social events CANNOT be measured because they are temporally unique.

And further, marginal utility is absolutely testable (and has been tested). So I don’t understand; or rather, you could be making any number of points, and it’s unclear which.

Marginal utility is a categorical description of a visible, measurable process, at least whenever that process results in an exchange. True, we cannot know the opportunity costs paid by individuals, but we can take a measurement whenever they do act to exchange goods or services.

Instead, the criticism is not made on grounds of material measurement, but rather:

a) empirical models in the social sciences are not predictive and are even inversely predictive in relation to their utility in time.

b) that they are consistently not predictive (although they are descriptive of the past) and therefore false, and

c) that as demonstrably false, they are unscientific. That due to subjectivity and innovation, plasticity of utility, and the resulting heterogeneity of capital, and asymmetry of information, shocks and the vicissitudes of time, they are logically destined to be false. (ie: it is not the use of measurement, it is the use of measurement to determine causality – not correlation but causality – that is scientific.)

d) that these false, non-predictive arguments are used to justify implementing dangerous, risk-accelerating, unscientific policy.

e) that we cannot model what might have been, had we not used false models to enact policy, and therefore calculate the real cost of policy. (ie: we cannot compare what might have been with what has come to pass, and sum our costs plus our profits.)

f) that by implementing such policies we expose ourselves to, and indeed encourage, greater risk. I.e.: the Austrian business cycle of booms and busts.

g) that it appears, in history, that whenever the commercial sector grows faster than the state can regulate it, redistribute the capital from it, and form a predatory bureaucracy upon it, the results for the entire society, at least narratively if not certainly empirically, seem to be better than those where state intervention has occurred.

Meaning that the Austrian criticism is that the use of the calculus of measurement in heuristic social processes will result in non-prediction and exacerbation of risk — or at least, such models will be limited to prediction based on the asymmetry of information discovered by the act of building the model, but not on the asymmetry of information yet to be developed by innovation or shocks, and therefore undiscoverable by the process of building a model.

I would argue that Keynes covered these problems in A.T.O.P., although I am not a scholar of his work, and that the Austrians agreed with him on many of those positions. But the PRACTICAL matter is that the profession is heavily invested in a technology that demonstrably does not work, yet is relied upon for policy decisions every day.

Models are a superior means of describing causal processes where language and the human limits of conception fail. However, they rely upon a mathematics derived from the much more simplistic physical sciences. And until we can measure the ‘natural forces’ of men’s mental capacity, which are largely the properties of memory in time in the presence of vast information, we have no formulae by which we can call our efforts sufficiently scientific, rather than simply a convenient means of toying with economies against the will of those struggling with knowledge and capital to avoid and circumvent all that toying.

So either I don’t understand, or I do not think your criticism is founded. The people that criticize empiricism may not be using a substantive foundation either, and may justify sentiments and intuitions with false appeals to reason that they do not fully understand. But I do not see how your criticism is logical in the context.

Looking forward, solving the problem of induction instead of relying on (false) equilibria requires that we understand, and develop a formula for, what might best be called ‘velocity’: the rate of innovation given the limited ability of the human mind to make ‘jumps’. Therein lies a formula of greater importance than E=mc^2. And because that velocity can be known, probabilism will have a rational boundary, rather than the irrational boundary we have conveniently constructed out of historicist necessity.

I hope I have been sufficiently cogent on a subject of such complexity that it has admittedly exhausted many of our best minds, and I apologize in advance for my failures.

NOTE: I think that Gene’s argument is a bit clearer now that I have read comments by others. And perhaps I’m adding additional vectors of inquiry rather than debating his position.

Gene’s argument is that people from the physical sciences argue that economics is not a science, and he counters the grounds on which their criticisms are based. I had interpreted his post as saying that people from the psychological school were forming the criticism against positivism in economics.

Gene’s criticisms are correct in that mathematics relies upon incomplete approximations that are convenient contrivances, and economic science relies upon similar assumptions; so he is attacking the physical sciences on their own methods, saying their criticisms of the social sciences are hypocritical.

I would argue that since the velocity of the transfer and transformation of energy in time and space is knowable, while the corresponding velocity of knowledge is not yet knowable, probabilism in the social sciences as we understand it is, as I have stated previously, unscientific. Not simply because the methods are logically false, and because the predictive capacity of our methods is false, but because NOT USING THEM appears to produce better results than using them.

And because of that, while results in the physical sciences have neutral consequences (or perhaps simply lack moral consequences, i.e. those that affect others without their consent), the consequences of the failures necessary for testing create negative externalities in the social sciences, as well as being simply counter-productive there.

The problem is not just moral deprivation of choice as libertarians often argue. It appears that the use of Positivism in the social sciences (at least in monetary policy that seeks to reduce unemployment) is not only illogical, it is simply harmful.

@Koppl
?? Gödel ?? Gödel’s incompleteness theorem. The Misesian argument is an a priori one, which advocates purport to be a closed system. It’s not, at least as it is currently expressed. (If you are not deeply into this nonsense, these arguments are messy.)
RE: “warnings”. Is the missing comma in my post confusing? “Both are open to Gödel, and Popper’s …” It’s Popper’s railroad track, Gödel’s incompleteness.
I’m travelling, don’t have access to the books, and I can’t find the Popper reference on the web. I’m not sure, but I think it’s in CR. Maybe OU, but pretty sure CR. I’ll post on my site or here when I find it. He is making a looser statement of Gödel’s theorem as applied to methods and the history of ideas.

Gödel’s famous theorem of 1931 has to do with systems that generate the arithmetic of whole numbers. It seems a bit dicey to draw implications of that stuff for praxeology. Mises’s methodological dualism ruled out physical determinism as an issue in social science. He said you gotta start, in some sense, with people’s thoughts and values. I just don’t see where that puts you in “deterministic directions.”

Determinism is one side of the coin. Popper, on the other hand, is arguing for the knowledge left undiscovered or uninvented by relying upon methodologies that expressly eliminate possibilities. This is the other half of the Godelian insight (and, in a more practical sense, why computers crash): any system employing finite testability as we currently understand it is necessarily incomplete, and as incomplete it leaves possible solutions undiscovered or uninvented.

But most importantly, the argument as currently used by the people Gene is indirectly referring to is framed as a priorism versus positivism.

We can argue individual approaches to the problem of induction, or the problem of information loss in aggregation. It is specious to argue over what each philosopher said; it is important to argue over his contributions to solving the problem that is yet unsolved: prediction in the social sciences, and the morality of positive experimentation based upon premises that are clearly logically and demonstrably false.

I'm not arguing against Misesian subjectivism. I'm arguing that his argument, as currently used by followers, is closed due to claims of apodeictic certainty – as it is used as a tool for combatting positivism. When in fact the apodeictic certainty of the a priori argument is simply false (because people do not have the clear preferences posited by the overly-trading-oriented Mises and the overly political Rothbard).

Quantitative prediction fails, and Misesian apriorism fails, because we do not know the answer to the problem of induction, which is the mathematics of the properties of the human mind and the velocity of POSSIBLE innovation that can arise from it, given the interaction between existing symbols and the stimuli it receives in time. (Mandelbrot's implied geometry of conceptual innovation versus our existing historicist probabilism.)

i.e., both methods of inquiry are false – both the empirically positive and the logically subjective methods fail us; they are just the best we have. We currently argue two points on a line, probabilistic versus subjective, instead of three triangular points: probabilistic versus subjective versus geometric (the third being the unknown mathematics of induction).

However, in the battle between the historical/predictive/empirical and the subjective/a priori, the subjective methodology seems to produce less HARM to humans and economies, and to encourage more innovation, while we are in the process of working out the problem of induction. (I'll leave the reasons why the subjectivists ignore certain realities – which is the point of Gene's argument – off the table for now.)

I am sitting here typing, hoping that you're not one of those people with whom I have to debate the difference between a philosopher's intentions and statements, and the implications of those statements as part of the body of philosophical work in the history of ideas, and how they relate to solving a few fundamental problems. :) Cataloguing other people's ideas isn't philosophy; it's the academic fallacy. Ideas are meaningless without understanding the underlying problems to whose solution all philosophers contribute fragments. One cannot go from books to problems, but must go from problems to books. :) For the very same reason we are having this discourse: the frailty of human reason, and the vast ignorance we face of our own inner workings, as we cast a dim light into the darkness of time, ignorance, and a necessarily kaleidic human universe, while we, in a vast division of knowledge and labor, attempt to profit by outwitting time.

So is it possible to interview every entrepreneur in the US? Who, by the way, counts as an entrepreneur? And isn't it worth noting that every Keynesian economist since Keynes has simply dropped the word "expected" from the terms and gone on?

OK, I don't want to geek out too much on this forum, because I get enough flak for that already. I now anticipate having to explain the importance of natural numbers in economic analysis, and the relationship between marginalism and subjectivity (under which there can only be natural numbers, because every object traded under marginalism is unique, and the errors we create are those of aggregating non-uniqueness), and therefore how that dependency makes the problem of economic empiricism a function of the limitations Godel described. Although if I could find someone smart enough to have that conversation with, it might be really interesting… (Hoping you might be?)
Curt

Aren’t you changing the subject, Curt? You first seemed to say that Godelian math somehow kills praxeology. You seemed to suggest that there is an infinite set of true, but unprovable praxeological theorems. BTW: I don’t see why that would have been a criticism of praxeology even if the inference had been valid.

Now you are suggesting that economics is in some sense discrete, and you infer that “economic empiricism” is therefore limited in some way. I don’t see how the one thing connects to the other. Mises famously denied that there are any constant relations in human history.

Godelian issues can indeed be subtle, so we should probably be extra careful to talk about one thing at a time when such issues are afoot.

I only now got your long post above. I think it came up with a delay. There is quite a lot there to chew on, so I’d better hesitate before attempting to add anything more to this conversation. I need time to digest. We can certainly agree on at least one point: the frailty of human reason.

Actually I'm subtly suggesting that you are worried about what Godel said rather than the argument conducted here, and are thereby changing the subject inadvertently. :)

RE: You first seemed to say that Godelian math somehow kills praxeology.

I said that praxeology, as it is used (today) in the argument against economic models, is subject to the same flaws as the physical-science methods used in those economic models. And I am pretty sure that's a true proposition. I suggest that Godel's criticism applies because, under marginalism, each object is unique, and therefore a natural number.

RE: You seemed to suggest that there is an infinite set of true, but unprovable praxeological theorems. BTW: I don’t see why that would have been a criticism of praxeology even if the inference had been valid.

I'm saying that the praxeological argument against models in the social sciences is predicated on the superiority of a set of descriptions of human action, posited as a priori, which are either false or insufficient: Mises, Rothbard and Hoppe posit them as complete when they are in fact incomplete, because of the difference between the observable properties of a transaction and the unobservable preferences that cause it, and because of the erroneous preference-stack, rather than preference-network, used to justify those arguments. (They do not deny most of this; they simply argue that these things are unimportant or unscientific because they are unknowable, when in fact external preferences appear to increase in priority as prices decrease.) That something is very difficult to measure or observe does not mean it is not a causal property. (Getting back to Gene's correct argument above.)

RE: Now you are suggesting that economics is in some sense discrete, and you infer that “economic empiricism” is therefore limited in some way.

I'm sorry, I don't follow that. I mean that models that rely upon our current use of historical probabilism and general equilibrium, even if highly specialized, are logically false and demonstrably false in prediction, yes, and therefore, while using measurements and math, are less scientific than they purport to be. (Again, Gene's argument above is correct.) But even practitioners do not disagree with that proposition; they simply say that it's the best we currently can do. Which falls back to the argument over whether the DSGE model, solving for unemployment, is superior to the subjectivist model, solving for innovation. I think innovation wins, at least over time. I don't see why that is controversial. I think the debate in general is over the balance between the two – staying Pareto efficient using both methods.

RE: I don’t see how the one thing connects to the other. Mises famously denied that there are any constant relations in human history.

Which is a very convenient proposition, isn't it? It simplifies the problem (artificially). Same for Rothbard. They're both wrong. People have to 'pay' for the institutions around them (including the institution of property) in forgone opportunity, and they also pay for options on abstract social ambitions, values and preferences in forgone opportunity costs. Hayek failed to address these issues as costs, possibly too distracted by his work in psychology. Land-holding sentiments (which Mises and Rothbard conveniently ignore, but Hayek doesn't, at least not explicitly) are necessary for the formation of a state.

Follow any economist's argument against his methodology until you get to the point where he says 'but that isn't a subject of economic inquiry', at which point he has illustrated his failing.

What I don't like about this debate is that everyone is arguing about the failures of each other's methods, rather than the failures of all methods to solve this problem. It's like the eight-coins-and-a-balance-scale logic puzzle: there are actually three places to put coins, the third being not on the scale at all.
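
To make the puzzle concrete, here is a minimal sketch (assuming the standard variant: eight coins, exactly one of which is a heavier counterfeit). Because each weighing has three outcomes, every coin can go to one of three places per weighing, and 3^2 = 9 ≥ 8, so two weighings suffice:

```python
# Sketch of the three-outcome insight behind the balance-scale puzzle:
# coins can go on the left pan, the right pan, or OFF the scale entirely.
# With one heavier counterfeit among 8 coins, 3^2 = 9 >= 8, so two
# weighings are enough.

def weigh(coins, left, right):
    """Return +1 if the left pan is heavier, -1 if the right is, 0 if balanced."""
    l = sum(coins[i] for i in left)
    r = sum(coins[i] for i in right)
    return (l > r) - (l < r)

def find_heavy(coins):
    """Locate the single heavier coin among 8 using exactly two weighings."""
    # First weighing: three coins vs three coins; two coins stay off the scale.
    first = weigh(coins, [0, 1, 2], [3, 4, 5])
    if first > 0:
        group = [0, 1, 2]      # left pan heavier: counterfeit is there
    elif first < 0:
        group = [3, 4, 5]      # right pan heavier
    else:
        group = [6, 7]         # balanced: counterfeit was off the scale
    # Second weighing: one coin from the group against another.
    second = weigh(coins, [group[0]], [group[1]])
    if second != 0:
        return group[0] if second > 0 else group[1]
    return group[2]            # balanced again: the coin left off is the fake
```

Genuine coins here weigh 1 unit and the counterfeit 2; the point is only that the third "place" (off the scale) is what makes the information arithmetic work, which is the analogy being drawn to the missing third methodology.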

In response to Troy Camplin (and, to a lesser extent, to Richard Ebeling and a few others),

I argued that cities are different from ant hills because, unlike ants, human beings think with concepts such as "neighbor," "stranger," and "citizen," which are essential to understanding social interaction in the city.

Troy replies: “Are not humans alive? You make my point.” What?

I pointed out that the conceptual apparatus which informs the decisions of many contemporary market participants includes a variety of aggregative concepts. (Here are a few examples from one day's headlines on Bloomberg: "Stocks, Futures, Aussie gain on job growth, Japan GDP, Copper hits record"; "Manufacturing jump to boost prospects for January rate rise, India credit"; and "Australia payrolls rise 54,600, more than twice estimates, currency jumps.")

Troy replies: “Aggregates have nothing to do with subjective human decisions. Aggregates are a mistaken attempt at ‘objectivity.’ Which suggests you either don’t understand subjectivity or aggregates.”

Huh? To say that many decision makers pay attention to changes in GDP, in stock market indexes, in total payrolls, etc., is not to say that GDP, the S&P 500, and total employment are agents who make decisions. Rather, they're concepts through which people comprehend their economic environment.

Troy replies: “Arguing that companies are using some bit of nonsense promulgated by a majority of economists, who they mistakenly believe know something about the economy, proves nothing. If everyone is using the same nonsense, then everyone is making the same errors.”

Really? When I offered examples of market participants who organize their thoughts about the economy with the help of aggregative concepts, I wasn’t arguing that these concepts were necessarily the most efficacious concepts, so Troy calling them “nonsense” is beside the point.

That said, Troy and Richard must find it a bit perplexing that competition has yet to drive this "nonsense" out of the marketplace. Within hours of the announcement of Obama's deal with the Republicans on a tax package, Goldman Sachs published revised forecasts of U.S. GDP, the unemployment rate, etc., presumably because they believe this information will be useful to their clients.

Given Goldman’s success, one might think these aggregates can, in fact, be useful in making investment decisions. Or one could say, as Troy does, that “if everyone is using the same nonsense, then everyone is making the same errors,” and, therefore, Goldman’s success is either a matter of luck or due to the fact that Goldman is better at this “nonsense” than most others. In either case, it’s hard to square this story with the Austrian view of competition as a mechanism that favors the astute and punishes the error-prone.

A majority of Americans believe in creationism. One would think that competition – given the absolute success of Darwinism (broadly defined) – would have driven this nonsense out.

Socialism is an incorrect understanding of human nature and of the economy. One would think that competition would have driven this nonsense out.

Related to both of these: a majority of people believe that if you have order, you have to have an orderer.

One may note that some past nonsense lasted a long time, such as the idea that the sun goes around a stationary earth. It sometimes takes a very long time for nonsense to go away.

If the nonsense is not quite pernicious enough, it can last a very long time. However, over time, enough errors accumulate because of it that a paradigm shift occurs.

I will note that you have a confused idea of aggregates. Copper would not be an example of an aggregate, though GDP (which has all sorts of problems of its own) would be. "Metals" would be an aggregate; copper would be disaggregated out of metals. If I told you that metal prices were going up, and you got excited and went out and bought some rhodium, you'd be very disappointed. You'd be making a stupid decision because you were paying attention to the aggregate "metals" and not to the details.

The proposition that includes the term 'aggregates' refers to the simple problem that, under subjective value and marginal utility, objects are unique in utility and value in time and space, and therefore the causal relations between humans and objects in time and space are unique.

In your critique above, you're confusing the utility, in the private sector, of aggregates for the purpose of decision making with the policy prescriptions derived from using those models as predictive.

Prices are aggregates as we use the term. There are prices offered, prices transacted, and a record of transacted prices. We aggregate these, and there is predictive value in that aggregation. But each price paid is unique in time and space, and the cost to the individual – because all costs are opportunity costs – is invisible in each of those transactions.
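
As a small illustration (with made-up numbers, not data from this thread), collapsing a record of transacted prices into a single index number is trivial, and the aggregate does carry information; but the time, place, parties, and forgone alternatives of each unique transaction are discarded in the process:

```python
# Hypothetical transaction records: each trade was unique in time and space.
trades = [
    {"price": 101.00, "qty": 5},
    {"price": 99.50, "qty": 2},
    {"price": 100.25, "qty": 10},
]

# The aggregate: a volume-weighted average price, one number standing
# in for many distinct events. The opportunity costs borne by each
# party appear nowhere in this record.
total_qty = sum(t["qty"] for t in trades)
vwap = sum(t["price"] * t["qty"] for t in trades) / total_qty
print(round(vwap, 2))  # → 100.38
```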

There is, then, a difference in cost to individuals when the information held in prices and in other aggregates is asymmetrically known. There is an advantage to the one with more knowledge. That is why prices and aggregates are valuable (and necessary) to actors in an economy – something you state correctly above.

But we cannot know the difference between what might have been and what has turned out to be, given the presence or absence of information. And we cannot know the difference between what could have been or might have been if we had not enacted monetary policy that affected those prices by way of regulation, taxation or interest rates. Such actions are kaleidic in their effect on causal relations.

The issue is not whether the market can produce aggregates for the purpose of assisting actors in participating in the market. The issue is whether MANIPULATION of regulation, currency, prices, or whatever, based upon the use of aggregates in MODELS and therefore in PREDICTION – especially given the inter-temporal transfers that interest rates and liquidity enable – produces inferior or superior results to having NOT acted. And furthermore, since the opportunity costs paid by the individuals subject to this manipulation are invisible, they are unmeasurable. The argument is that the use of such policy does more harm than good in the long run, and this seems to be supported by the evidence.

So you're confusing the use of aggregates, and the fact that people use them despite their invalidity, with the fact that they are non-predictive – false both logically and empirically – and with the fact that the government enacts policy that is harmful and exaggerates the boom and bust in an effort to accelerate the economy, in the hope of creating less unemployment. This is why there is such a heady and enduring debate: our tools are knowingly unscientific, and we are therefore experimenting with people's money, lives and indeed their very civilization.

Furthermore, as Taleb discusses at length, and as research validates, people do not use these formulae in daily life. They rely on the general sentiments expressed by these abstract formulae and adhere to their rules of thumb (habits). Just try to find a banker who even understands the banking system. Even on the top trading desks, they operate with highly localized information.

The benefits gained from empirical analysis appear to derive entirely from the asymmetry of information exposed in the models, which enables modelers to profit from that necessary asymmetry – largely a product of simple specialization in this complex division of knowledge and labor – not from the predictive properties of the models.