Wednesday, December 18, 2013

A scuffle has broken out among some economists over the touchy topic of microfoundations, i.e. the usual formal requirement that macroeconomic models, to be considered legitimate, must be based on things like households and firms optimizing their intertemporal utility, having rational expectations, and so on. Apparently, things got kicked off by economist Tony Yates, who offered a spirited defense of microfoundations of this kind; he's really irritated that people keep criticizing them and worries that, My God!, this might cause DSGE macroeconomists to lose some credibility! In response, Simon Wren-Lewis came back with an equally spirited argument about why modelling should be more flexible and "eclectic." Noah Smith's summary of the whole disagreement puts everything in context.

Noah makes the most important point right at the end. Simply put, no one (I think) is against the authentic spirit of microfoundations, i.e. the idea that macroeconomic models ought to be based on plausible stories of how the real actors in an economy behave. If you get that right, then obviously your model might stand a chance of getting larger aggregate things right too. The problem we have today is that the microfoundations you find in DSGE models aren't like this in the least. So macromodels are actually based on things we know to be wrong. It's very strange indeed. As Noah puts it:

Yates says I just want to get rid of all the microfoundations. But that is precisely, exactly, 180 degrees wrong! I think microfoundations are a great idea! I think they're the dog's bollocks! I think that macro time-series data is so uninformative that microfoundations are our only hope for really figuring out the macroeconomy. I think Robert Lucas was 100% on the right track when he called for us to use microfounded models.

But that's precisely why I want us to get the microfoundations right. Many of the microfoundations we use now (not all, but many) are just wrong. Obviously, clearly wrong. Lots of microeconomists I talk to agree with me about that. And lately I've been talking to some pretty prominent macroeconomists who agree as well.

So I applaud the macroeconomists who are working on trying to develop models with better microfoundations (here is a good example). Hopefully the humble stuff I'm doing in finance can lead to some better microfoundations too. And in the meantime I'm also happy to sit here and toss bombs at people who think the microfoundations we have are good enough!

I couldn't agree more.

In fact, before coming across this debate this morning, I had intended to make a short post linking to the very informative lecture (below, courtesy of Mark Thoma) by macroeconomist George Evans. Lord knows I spend enough time criticizing economists -- and this recent post discussed the limitations of the learning literature, in which Evans has been a key player -- so I want to make clear that I do admire the things he does.

He tells an interesting story about an economic model (a standard New Keynesian model) that -- when the agents in the model learn in a particular constrained way -- has two different equilibria. One is locally stable and the economy has inflation right around a targeted value. Start out with inflation and consumption and expectations close to that equilibrium and you'll move toward that point over time. The second equilibrium is, however, unstable. If you start out sufficiently far away from the stable equilibrium, you won't go there at all, but will wander down into a deflationary zone (and what happens then I don't know).

This model for aggregate behaviour is based on some fairly simple low-dimensional equations for how current consumption and inflation feed, via expectations, into future values and a trajectory for the economy. I don't know how plausible these equations are. I'm guessing that someone can make a good argument about why they should have the form they do (or a similar form). That story would involve references to how things happening now in the economy would influence peoples' behaviour and their expectations, and then how these would cause certain kinds of changes. To really believe this you'd want to see some evidence that this story is correct, i.e. that people, firms, etc., really do tend to behave like this.
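The two-equilibria story is easy to illustrate in a few lines. The following is a deliberately stylized toy of my own construction, not Evans's actual model: expected inflation adjusts adaptively toward realized inflation, and realized inflation feeds back nonlinearly on expectations in a way that produces two steady states, a targeted one at 2% and a deflationary one at 0%. All coefficients are illustrative assumptions.

```python
# Stylized toy (my own construction, not Evans's actual model):
# expected inflation pi_e adjusts adaptively toward realized inflation, and
# realized inflation feeds back nonlinearly on expectations, giving two
# steady states: a target at 2% and a deflationary one at 0%.

def step(pi_e, gamma=0.5, c=0.1, low=0.0, target=2.0):
    """One period: realized inflation is f(pi_e) = pi_e + c*(pi_e - low)*(target - pi_e),
    and pi_e moves a fraction gamma of the way toward it."""
    realized = pi_e + c * (pi_e - low) * (target - pi_e)
    return pi_e + gamma * (realized - pi_e)

def simulate(pi0, n=300):
    path = [pi0]
    for _ in range(n):
        path.append(step(path[-1]))
        if abs(path[-1]) > 50:   # stop once clearly diverging
            break
    return path

good = simulate(1.0)    # starts between the two steady states
bad = simulate(-0.1)    # starts just below the unstable steady state
print(round(good[-1], 3))   # approx 2.0: converges to the target
print(bad[-1] < -5)         # True: a deflationary spiral
```

Trajectories starting above the lower steady state climb to the target; those starting below it spiral downward, mirroring the stable/unstable pair in Evans's story.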

The point I want to make is that -- for someone like myself who has not been socialized to accept the necessity of what currently counts as "microfoundations" -- nothing about the story becomes more plausible when I wade into the equations of the New Keynesian model and see how households and firms independently optimize their intertemporal utilities subject to certain budget constraints. If anything, seeing all this dubious stuff makes me less likely to believe in the plausibility of the low-dimensional equations for aggregate variables. And this is precisely the problem with microfoundations of this kind. They don't give a good argument for why the aggregate variables should satisfy these equations. They give a bad, unconvincing argument, at least for me.

Tuesday, December 17, 2013

A great article in the New York Review of Books by US District Court Judge Jed Rakoff, asking pointed questions about why no executives of major financial institutions have been convicted of fraud for their actions in the lead up to the financial crisis. As he argues, the Commission set up to explore the crisis found plenty of evidence of widespread fraud; the Justice Department has simply failed to act. Why?

...the stated opinion of those government entities asked to examine the financial crisis overall is not that no fraud was committed. Quite the contrary. For example, the Financial Crisis Inquiry Commission, in its final report, uses variants of the word “fraud” no fewer than 157 times in describing what led to the crisis, concluding that there was a “systemic breakdown,” not just in accountability, but also in ethical behavior.

As the commission found, the signs of fraud were everywhere to be seen, with the number of reports of suspected mortgage fraud rising twenty-fold between 1996 and 2005 and then doubling again in the next four years. As early as 2004, FBI Assistant Director Chris Swecker was publicly warning of the “pervasive problem” of mortgage fraud, driven by the voracious demand for mortgage-backed securities. Similar warnings, many from within the financial community, were disregarded, not because they were viewed as inaccurate, but because, as one high-level banker put it, “A decision was made that ‘We’re going to have to hold our nose and start buying the stated product if we want to stay in business.’”

Without giving further examples, the point is that, in the aftermath of the financial crisis, the prevailing view of many government officials (as well as others) was that the crisis was in material respects the product of intentional fraud. In a nutshell, the fraud, they argued, was a simple one. Subprime mortgages, i.e., mortgages of dubious creditworthiness, increasingly provided the chief collateral for highly leveraged securities that were marketed as AAA, i.e., securities of very low risk. How could this transformation of a sow’s ear into a silk purse be accomplished unless someone dissembled along the way?

While officials of the Department of Justice have been more circumspect in describing the roots of the financial crisis than have the various commissions of inquiry and other government agencies, I have seen nothing to indicate their disagreement with the widespread conclusion that fraud at every level permeated the bubble in mortgage-backed securities. Rather, their position has been to excuse their failure to prosecute high-level individuals for fraud in connection with the financial crisis on one or more of three grounds:

Rakoff goes on to examine these grounds, and finds none of them convincing. So, then, why no prosecutions? He discounts the revolving door theory -- that prosecutors have avoided action because of former links to financial firms, or hopes of future employment there -- because, in his experience, prosecutors are well motivated to get convictions. He suggests there are a host of reasons: prosecutors simply had other priorities in the years after 9/11/2001; in many cases government regulators acquiesced early on to changing practices, wherein increasingly lax demands on mortgage documentation became the norm. And, finally and most importantly, he points to changes in prosecuting practices over the past few decades:

The final factor I would mention is both the most subtle and the most systemic of the three, and arguably the most important. It is the shift that has occurred, over the past thirty years or more, from focusing on prosecuting high-level individuals to focusing on prosecuting companies and other institutions. It is true that prosecutors have brought criminal charges against companies for well over a hundred years, but until relatively recently, such prosecutions were the exception, and prosecutions of companies without simultaneous prosecutions of their managerial agents were even rarer.

The reasons were obvious. Companies do not commit crimes; only their agents do. And while a company might get the benefit of some such crimes, prosecuting the company would inevitably punish, directly or indirectly, the many employees and shareholders who were totally innocent. Moreover, under the law of most US jurisdictions, a company cannot be criminally liable unless at least one managerial agent has committed the crime in question; so why not prosecute the agent who actually committed the crime?

In recent decades, however, prosecutors have been increasingly attracted to prosecuting companies, often even without indicting a single person. This shift has often been rationalized as part of an attempt to transform “corporate cultures,” so as to prevent future such crimes; and as a result, government policy has taken the form of “deferred prosecution agreements” or even “nonprosecution agreements,” in which the company, under threat of criminal prosecution, agrees to take various prophylactic measures to prevent future wrongdoing. Such agreements have become, in the words of Lanny Breuer, the former head of the Department of Justice’s Criminal Division, “a mainstay of white-collar criminal law enforcement,” with the department entering into 233 such agreements over the last decade. But in practice, I suggest, this approach has led to some lax and dubious behavior on the part of prosecutors, with deleterious results.

If you are a prosecutor attempting to discover the individuals responsible for an apparent financial fraud, you go about your business in much the same way you go after mobsters or drug kingpins: you start at the bottom and, over many months or years, slowly work your way up. Specifically, you start by “flipping” some lower- or mid-level participant in the fraud who you can show was directly responsible for making one or more false material misrepresentations but who is willing to cooperate, and maybe even “wear a wire”—i.e., secretly record his colleagues—in order to reduce his sentence. With his help, and aided by the substantial prison penalties now available in white-collar cases, you go up the ladder.

But if your priority is prosecuting the company, a different scenario takes place. Early in the investigation, you invite in counsel to the company and explain to him or her why you suspect fraud. He or she responds by assuring you that the company wants to cooperate and do the right thing, and to that end the company has hired a former assistant US attorney, now a partner at a respected law firm, to do an internal investigation. The company’s counsel asks you to defer your investigation until the company’s own internal investigation is completed, on the condition that the company will share its results with you. In order to save time and resources, you agree.

Six months later the company’s counsel returns, with a detailed report showing that mistakes were made but that the company is now intent on correcting them. You and the company then agree that the company will enter into a deferred prosecution agreement that couples some immediate fines with the imposition of expensive but internal prophylactic measures. For all practical purposes the case is now over. You are happy because you believe that you have helped prevent future crimes; the company is happy because it has avoided a devastating indictment; and perhaps the happiest of all are the executives, or former executives, who actually committed the underlying misconduct, for they are left untouched.

I suggest that this is not the best way to proceed. Although it is supposedly justified because it prevents future crimes, I suggest that the future deterrent value of successfully prosecuting individuals far outweighs the prophylactic benefits of imposing internal compliance measures that are often little more than window-dressing. Just going after the company is also both technically and morally suspect. It is technically suspect because, under the law, you should not indict or threaten to indict a company unless you can prove beyond a reasonable doubt that some managerial agent of the company committed the alleged crime; and if you can prove that, why not indict the manager? And from a moral standpoint, punishing a company and its many innocent employees and shareholders for the crimes committed by some unprosecuted individuals seems contrary to elementary notions of moral responsibility.

These criticisms take on special relevance, however, in the instance of investigations growing out of the financial crisis, because, as noted, the Department of Justice’s position, until at least recently, is that going after the suspect institutions poses too great a risk to the nation’s economic recovery. So you don’t go after the companies, at least not criminally, because they are too big to jail; and you don’t go after the individuals, because that would involve the kind of years-long investigations that you no longer have the experience or the resources to pursue.

In conclusion, I want to stress again that I do not claim that the financial crisis that is still causing so many of us so much pain and despondency was the product, in whole or in part, of fraudulent misconduct. But if it was—as various governmental authorities have asserted it was—then the failure of the government to bring to justice those responsible for such colossal fraud bespeaks weaknesses in our prosecutorial system that need to be addressed.

Friday, December 13, 2013

For many economists, prevailing theories of macroeconomics based on the idea of rational expectations (RE) are things of elegance and beauty, somewhat akin to the pretty face you see below. This is especially their view in the light of two decades of research looking at learning as a foundation for macroeconomics -- something economists refer to as the "learning literature." You'll hear it said that these studies have shown that most of the RE conclusions also follow from much more plausible assumptions about how people form expectations and adjust them over time by learning and adapting. Sounds really impressive.

As with this apparently pretty face, however, things aren't actually so beautiful and elegant if you take the time to read some of the papers in the learning literature and see what has been done. For example, take this nice review article by Evans and Honkapohja from a few years ago, which reports on interesting research. If you study it, however, you'll find that the "learning" studied in this line of work is not at all what most of us would think of as learning as we know it in the real world. I've written about this before and might as well just quote something I said there:

What the paper does is explore what happens in some of the common rational expectations models if you suppose that agents' expectations aren't formed rationally but rather on the basis of some learning algorithm. The paper shows that learning algorithms of a certain kind lead to the same equilibrium outcome as the rational expectations viewpoint. This IS interesting and seems very impressive. However, I'm not sure it's as interesting as it seems at first.

The reason is that the learning algorithm is indeed of a rather special kind. Most of the models studied in the paper, if I understand correctly, suppose that agents in the market already know the right mathematical form they should use to form expectations about prices in the future. All they lack is knowledge of the values of some parameters in the equation. This is a little like assuming that people who start out trying to learn the equations for, say, electricity and magnetism, already know the right form of Maxwell's equations, with all the right space and time derivatives, though they are ignorant of the correct coefficients. The paper shows that, given this assumption in which the form of the expectations equation is already known, agents soon evolve to the correct rational expectations solution. In this sense, rational expectations emerges from adaptive behaviour.

I don't find this very convincing as it makes the problem far too easy. More plausible, it seems to me, would be to assume that people start out with not much knowledge at all of how future prices will most likely be linked by inflation to current prices, make guesses with all kinds of crazy ideas, and learn by trial and error. Given the difficulty of this problem, and the lack even among economists themselves of great predictive success, this would seem more reasonable. However, it is also likely to lead to far more complexity in the economy itself, because a broader class of expectations will lead to a broader class of dynamics for future prices. In this sense, the models in this paper assume away any kind of complexity from a diversity of views.

To be fair to the authors of the paper, they do spell out their assumptions clearly. They state in fact that they assume that people in their economy form views on likely future prices in the same way modern econometricians do (i.e. using the very same mathematical models). So the gist seems to be that in a world in which all people think like economists and use the equations of modern econometrics to form their expectations, then, even if they start out with some of the coefficients "mis-specified," their ability to learn to use the right coefficients can drive the economy to a rational expectations equilibrium. Does this tell us much?

My view is that NO, it doesn't tell us much. It's as if the point of the learning literature hasn't really been to explore what might happen in macroeconomics if people form expectations in psychologically realistic ways, but to see how far one can go in relaxing the assumptions of RE while STILL getting the same conclusions. Of course there's nothing wrong with that as an intellectual exercise, but it's hardly a full-bore effort to understand economic reality. It's more an exercise in theory preservation, examining the kinds of rhetoric RE theorists might be able to use to defend the continued use of their favorite ideas. "Yes, if we use the word 'learning' in a very special way, we can say that RE theories are fully consistent with human learning!"
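To see what this special kind of "learning" amounts to, here is a minimal sketch in the spirit of the examples Evans and Honkapohja study (my own toy construction; the model and all coefficients are illustrative assumptions). Agents are handed the correct functional form of the price process -- a constant plus noise -- and merely estimate the constant by averaging past observations:

```python
import random
random.seed(0)

# Toy in the spirit of Evans & Honkapohja's examples (my own construction;
# model and coefficients are illustrative). The actual price depends on the
# average expectation,
#     p_t = mu + alpha * E_t[p] + noise,   with alpha < 1,
# so the rational expectations (RE) equilibrium is p* = mu / (1 - alpha).
# Agents are assumed to know this form already; all they "learn" is the
# constant, estimated as the running mean of past prices.

mu, alpha, sigma = 2.0, 0.5, 0.1
re_equilibrium = mu / (1 - alpha)   # = 4.0

belief = 0.0        # initial estimate of the constant
mean, n = 0.0, 0
for t in range(20000):
    price = mu + alpha * belief + random.gauss(0, sigma)
    n += 1
    mean += (price - mean) / n   # decreasing-gain (least-squares) update
    belief = mean

print(round(belief, 1))   # close to the RE value 4.0
```

Because the perceived form already matches the true one, the estimate converges to the rational expectations value mu / (1 - alpha); the hard part of learning has been assumed away before the simulation even starts.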

Anyway, I've revisited this idea in my most recent Bloomberg column, which should appear this weekend. I find it quite irritating that lots of economists go on repeating this idea that the learning literature shows that it's OK to use RE when in fact it does nothing of the sort. You often find this kind of argument when economists smack down their critics, implying that those critics "just don't know the literature." In an interview from a few years ago, for example, Thomas Sargent suggested that criticism of excessive reliance on rationality in macroeconomics reflects "... either woeful ignorance or intentional disregard for what much of modern macroeconomics is about and what it has accomplished."

On a more thoughtful note, Oxford economist Simon Wren-Lewis also recently defended the RE assumption (responding to a criticism by Lars Syll), mainly arguing that he hasn't seen any useful alternatives. He also refers to this allegedly deep learning literature as a source of wisdom, although he does acknowledge that its aim has been fairly limited:

...If I really wanted to focus in detail on how expectations were formed and adjusted, I would look to the large mainstream literature on learning, to which Professor Syll does not refer. (Key figures in developing this literature included Tom Sargent, Albert Marcet, George Evans and Seppo Honkapohja: here is a nice interview involving three of them.) Macroeconomic ideas derived from rational expectations models should always be re-examined within realistic learning environments, as in this paper by Benhabib, Evans and Honkapohja for example. No doubt that literature may benefit from additional insights that behavioural economics and others can bring. However it is worth noting that a key organising device for much of the learning literature is the extent to which learning converges towards rational expectations.

However most of the time macroeconomists want to focus on something else, and so we need a simpler framework. In practice that seems to me to involve a binary choice. Either we assume that agents are very naive, and adopt something very simple like adaptive expectations (inflation tomorrow will be based on current and past inflation), or we assume rational expectations. My suspicion is that heterodox economists, when they do practical macroeconomics, adopt the assumption that expectations are naive, if they exist at all (e.g. here). So I want to explain why, most of the time, this is the wrong choice. My argument here is similar but complementary to a recent piece by Mark Thoma on rational expectations.

As I said above, Wren-Lewis is one of the more thoughtful defenders of RE, but he too here lapses into the use of "learning" without any qualification.

More importantly, however, I'm not sure why Wren-Lewis thinks there is only a binary choice. After all, there is a huge range of alternatives between simple naive expectations and RE and this is precisely the range inhabited by real people. So why not look there? Why not actually look to the psychology literature on how people learn and use some ideas from that? Or, why not do some experiments and see how people form expectations in plausible economic environments, then build theories in that way?

This kind of work can be done, and is being done. My Bloomberg column touches briefly on this really fascinating paper from earlier this year by economist Tiziana Assenza and colleagues. What they did, briefly, is to run experiments with volunteers who had to make predictions of inflation (and sometimes also the output gap) in a laboratory economy. The economy was simple: the volunteers' expectations fed into determining future economic outcomes for inflation etc. through a simple low-dimensional set of equations known perfectly to the experimenters, but NOT known by the volunteers. So the dynamics of the economy here were made artificially simple, and hence easier to learn than they would be in a real economy; but the volunteers weren't given any crutch to help them learn, such as full knowledge of the form of the equations. They had to, gasp, learn on their own! In a series of experiments, Assenza and colleagues then measured what happened in the economy -- did it settle into an equilibrium, did it oscillate, etc. -- and also could closely study how people formed expectations and whether their expectations converged to some homogeneous form or stayed heterogeneous. Did they eventually converge to rational expectations? Umm, NO.

From their conclusions:

In this paper we use laboratory experiments with human subjects to study individual expectations, their interactions and the aggregate behavior they co-create within a New Keynesian macroeconomic setup and we fit a heterogeneous expectations switching model to the experimental data. A novel feature of our experimental design is that realizations of aggregate variables depend on individual forecasts of two different variables, the output gap and inflation. We find that individuals tend to base their predictions on past observations, following simple forecasting heuristics, and individual learning takes the form of switching from one heuristic to another. We propose a simple model of evolutionary selection among forecasting rules based on past performance in order to explain individual forecasting behavior as well as the different aggregate outcomes observed in the laboratory experiments, namely convergence to some equilibrium level, persistent oscillatory behavior and oscillatory convergence. Our model is the first to describe aggregate behavior in a stylized macro economy as well as individual micro behavior of heterogeneous expectations about two different variables. A distinguishing feature of our heterogeneous expectations model is that evolutionary selection may lead to different dominating forecasting rules for different variables within the same economy, for example a weak trend following rule dominates inflation forecasting while adaptive expectations dominate output forecasting (see Figs. 9(c) and 9(d)).

We also perform an exercise of empirical validation on the experimental data to test the model’s performance in terms of in-sample forecasting as well as out-of-sample predicting power. Our results show that the heterogeneous expectations model outperforms models with homogeneous expectations, including the rational expectations benchmark. [MB: In the paper they actually found that the RE benchmark provided the WORST fit of any of several possibilities considered.]

In the experiments, real learning behavior led to a range of interesting outcomes in this economy, including persistent oscillations in inflation and economic output without any equilibrium, or extended periods of recession driven by several distinct groups clinging to very different expectations of the future. Relaxing the assumption of rational expectations turns out not to be a minor thing at all. Include realistic learning behavior in your models, and you get a realistically complex economy that is very hard to predict and control, and subject to many kinds of natural instability.

One of the most important things here is that the best way to generate behavior like that observed in this experimental economy was in simulations in which agents formed their expectations through an evolutionary process, selecting from a set of heuristics and choosing whichever one happened to be working well in the recent past. This builds on earlier work of William Brock and Cars Hommes, the latter being one of the authors of the current paper. It also builds, of course, on the early work from the Santa Fe Institute on adaptive models of financial markets, which uses a similar modelling approach.
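The switching mechanism itself is easy to sketch. Below is a heavily simplified toy in the Brock-Hommes spirit, NOT the fitted model of Assenza and colleagues; the two rules, the coefficients, and the "economy" are all my own illustrative assumptions. Two forecasting heuristics compete, and the fraction of agents using each is a logit function of the rules' recent squared forecast errors.

```python
import math

# Simplified Brock-Hommes-style toy (illustrative assumptions throughout):
# two forecasting heuristics compete, and the share of agents using each
# rule is a logit function of the rules' recent squared forecast errors.

BETA = 2.0   # intensity of choice: how strongly agents chase performance

def adaptive(history):
    """Adaptive rule: weighted average of the last two observations."""
    return 0.65 * history[-1] + 0.35 * history[-2]

def trend(history):
    """Weak trend-following rule: extrapolate the last change."""
    return history[-1] + 0.4 * (history[-1] - history[-2])

def switching_weights(err_a, err_t, beta=BETA):
    """Discrete-choice (logit) weights from past squared forecast errors."""
    ea, et = math.exp(-beta * err_a), math.exp(-beta * err_t)
    return ea / (ea + et), et / (ea + et)

history = [2.0, 2.5]   # initial inflation observations
err_a = err_t = 0.0
for t in range(50):
    fa, ft = adaptive(history), trend(history)
    wa, wt = switching_weights(err_a, err_t)
    avg_forecast = wa * fa + wt * ft
    realized = 0.5 + 0.8 * avg_forecast   # self-referential law of motion
    # update each rule's tracked performance (squared error, with memory)
    err_a = 0.7 * err_a + 0.3 * (fa - realized) ** 2
    err_t = 0.7 * err_t + 0.3 * (ft - realized) ** 2
    history.append(realized)

print(round(history[-1], 2), round(wa, 2), round(wt, 2))
```

With the tame coefficients chosen here the economy settles down and the two rules end up splitting the population; with stronger feedback or a higher intensity of choice, the same mechanism can generate the kind of persistent oscillations seen in the experiments.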

So, here we do have an alternative to rational expectations, one that is both far more realistic in psychological terms, and also more realistic in generating the kinds of outcomes one sees in real experiments and real economies. Wonderful. Economists are no longer stuck with their RE straitjacket, but can readily begin exploring the kinds of things we should expect to see in economies where people act like real people (of course, a few economists are doing this, and Assenza and colleagues give some recent references in their paper).

I think this kind of thing is much more deserving of being called a "learning literature." I don't know why it doesn't get more attention.

Wednesday, December 11, 2013

Larry Summers recently made some waves with his proposal that maybe we're in a new era of "secular stagnation," in which low growth is the norm, and much of it comes through temporary and artificial bubbles. Paul Krugman backed the idea here. At face value, it all seems somewhat plausible, but also sounds a lot like a "just so" story to cover up and explain why economic policy and low interest rates haven't been enough to encourage new growth, yet also haven't caused inflation. It also fits together quite well with the usual stories told about the failure of ordinary policy at the zero lower bound, which turns ordinary economics on its head.

Now, I don't want to say that story is completely wrong, but remember that it comes out of quite standard macro analyses based on representative agent models with individuals and firms optimizing over time, and where -- perhaps most importantly -- things like debt overhang do not enter in any way into explaining how people are behaving (and why they may be hugely risk averse). That should be enough to raise some major questions about the plausibility of the story, especially in the aftermath of the biggest financial crisis in a century. For a lot more on such doubts, see the illuminating recent paper Stable Growth in an Era of Crises by Joseph Stiglitz.

But also see this convincing counterargument by some analysts at Independent Strategy, as discussed in the Financial Times. From Izabella Kaminska's discussion:

From the note, their main points are:

• There is no shortage of high return investment projects in the world. And the dearth of global corporate investment, which drove the great recession, means that productive potential is shrinking despite corporate profitability, leverage and cash balances being sound.

• The three ingredients for growth are a) a stable macro environment; b) a sound banking system; c) economic reforms that encourage entrepreneurship. What is missing right now is private sector confidence in the ability of governments and central bankers to provide all three.

• Credit bubbles can boost growth only temporarily and incur heavy costs in terms of subsequent deleveraging and misallocation of resources.

And expanding a bit further, they add:

Secular stagnation is a myopic and short-term view for two reasons. First, it is based on the experience of the Anglo-Saxon economies and parts of Europe currently as well as Japan since the bursting of the bubble at the start of the 1990s. Krugman muses that interest rates should be set at the growth rate of populations, because they would then be equal to a society’s potential capital productivity (and the long-term return on it). But the change in population growth is less relevant than the rise in productivity of an expanding workforce.

Take Germany: its population is ageing and its net population growth is slowing to a trickle (although that may be improved by increased net immigration from southern and eastern Europe). But Germany’s productivity level and growth is high (as is total factor productivity, expressing the gains from technology). Italy has a similar stagnation in its working population, but its real GDP growth has disappeared because of the fall in total factor productivity — Figure 1.

Actually, this pushback isn't really surprising. It's what you get if you take the longer historical view, rather than trying to make excuses for why economic theory still can't make sense of things (the theory is poor, that's why!). As a couple of economists from Goldman Sachs noted just after Summers' speech:

"Our view of the recent weakness is more cyclical than secular... The slow rate of recovery in recent years is roughly in line with the performance of other economies following major financial crises, as shown by Reinhart and Rogoff, and the reasons for the weakness in aggregate demand over the last few years have now begun to diminish."

This refers to the great book by Reinhart and Rogoff, by the way, not their other discredited paper.

Saturday, December 7, 2013

For those not following along with recent physics and materials science, the newest wonder material is graphene, made of two-dimensional sheets of carbon just one atom thick. The 2010 Nobel Prize in physics was awarded for its discovery. Here's a short primer on why it's cool: amazing strength, conductivity and flexibility. To be honest, there's a long road ahead in making practical devices from the stuff -- this article in Nature from a couple weeks ago surveyed the promise and obstacles -- but the potential is huge.

Now, how about this for irony: a paper just out in Nature Communications shows that graphene quantum dots can be made in a very easy one-step process from ordinary coal. A quantum dot is like an artificial atom, and can be engineered to absorb or emit light at precise frequencies. Isn't it ironic that the cheap stuff we're burning all over the globe for crude energy may be a great source of one of the most amazing materials we've ever discovered? Here's the abstract of the paper:

Coal is the most abundant and readily combustible energy resource being used worldwide. However, its structural characteristic creates a perception that coal is only useful for producing energy via burning. Here we report a facile approach to synthesize tunable graphene quantum dots from various types of coal, and establish that the unique coal structure has an advantage over pure sp²-carbon allotropes for producing quantum dots. The crystalline carbon within the coal structure is easier to oxidatively displace than when pure sp²-carbon structures are used, resulting in nanometre-sized graphene quantum dots with amorphous carbon addends on the edges. The synthesized graphene quantum dots, produced in up to 20% isolated yield from coal, are soluble and fluorescent in aqueous solution, providing promise for applications in areas such as bioimaging, biomedicine, photovoltaics and optoelectronics, in addition to being inexpensive additives for structural composites.
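Why can a quantum dot be "engineered" to emit at precise frequencies? The emission wavelength depends on the dot's size. Here's a minimal sketch of that idea, using the crudest possible model (an electron in an infinite one-dimensional box, with free-electron mass); real graphene quantum dots need a far more sophisticated treatment, and the sizes below are hypothetical, but the size-tunability comes through:

```python
# Toy model: size-dependent emission of a "quantum dot" treated as a
# particle in an infinite 1-D box. Illustration only -- real graphene
# quantum dots are not captured by this simple picture.
H = 6.626e-34      # Planck constant, J*s
M_E = 9.109e-31    # free-electron mass, kg (effective mass ignored)
C = 2.998e8        # speed of light, m/s

def emission_wavelength_nm(box_nm: float) -> float:
    """Photon wavelength for the n=2 -> n=1 transition in a box of width box_nm."""
    L = box_nm * 1e-9
    e1 = H**2 / (8 * M_E * L**2)     # ground-state energy E_1 = h^2 / (8 m L^2)
    delta_e = (2**2 - 1**2) * e1     # energy gap between n=2 and n=1
    return C * H / delta_e * 1e9     # lambda = h c / delta_E, in nm

for size in (2.0, 3.0, 5.0):         # hypothetical dot sizes in nanometres
    print(f"{size:.0f} nm box -> {emission_wavelength_nm(size):.0f} nm photon")
```

The key point: the energy gap scales as 1/L², so smaller dots emit bluer light. Tuning the dot size tunes the colour, which is exactly the property that makes quantum dots attractive for bioimaging and optoelectronics.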

Friday, December 6, 2013

Something a little weird happened with my latest Bloomberg column, which appeared last Monday only to disappear almost instantaneously, for mysterious reasons. After some investigation, it seems that a "code" was missing in the HTML, or in who knows what other language. Anyway, it's back now.

The topic is Obamacare. Briefly, the theme is that healthcare isn't something we should expect markets to handle well (think of Akerlof's "The Market for Lemons" and market failures in similar situations). Economists, I think, mostly know this; some things can be better organized by government. Why doesn't that message get out? Or do economists not really believe it? I can't tell.

Anyway, everyone should have a look at two things that go much deeper than my piddling little column:

1. A great recent article by Michael Sandel that examines how and why markets are often anything but value-free; making a market for something often changes how we think about and value that thing, with huge implications for whether the market is beneficial or not. An important point he makes is that whether something should be left to the market is NOT a question of economics; it always (or almost always) involves values far broader than economic efficiency, and so goes well outside economists' claimed area of expertise.

I think Sandel is right. And I think lots of people are starting to realise this. Even the Pope!!

2. A second thing worth reading is a fascinating book from 1976 by Fred Hirsch, Social Limits to Growth. I'd never heard of it before reading Sandel's article, which is a little embarrassing, as I'm sure every graduate student in economics has read it as part of their ordinary training. I have a lot to learn. It is a real classic, and it suggests that some of our basic psychological and social behaviours must have long-term effects on our economic well-being that the usual theories of markets completely miss. Kind of obvious when you say it like that, but this is economics: people have tried very hard to deny the obvious. Hirsch tried hard not to.

And, for a short, excellent primer on Hirsch, see this piece by someone in the philosophy department of the University of Manitoba, or connected to that department, or a dog of someone in the department... I have no idea who. But it's written very clearly and I admire it.

Thursday, December 5, 2013

The one thing about modern macroeconomics I find really hard to comprehend is that theorists seem to jump through hoops to get models with a certain kind of mathematical consistency, even though this guarantees that the models make very little contact with reality (and real data). In fact, it guarantees that these theories rest on sweeping assumptions we know to be false in the real world. I've written about this mystery before. A theory with "microfoundations" -- which macro theories are supposed to have to be considered respectable -- is thereby a theory we know is inconsistent with what we know about real human behavior and economic reality. Economists, it seems, only allow themselves to believe things they know NOT to be true! How wonderful and counter-intuitive! (I can see why the field must be appealing.)

Imagine if the engineers building and managing modern GPS and other global navigation systems were, for bizarre historical reasons, constrained to go on using a map of the globe essentially like that pictured above: The Square and Stationary Earth of Professor Orlando Ferguson.

This blog explores the potential for the transformation of economics and finance through the inspiration of physics and the other natural sciences. Where traditional economics has emphasized self-regulation and market equilibrium, the new perspective emphasizes the myriad positive feedbacks that often drive markets away from equilibrium and cause tumultuous crashes and other crises. Read more about the idea.

Who am I?

Physicist and science writer. I was formerly an editor at the international science journal Nature and at the magazine New Scientist. I am the author of three earlier books, and have written extensively for publications including Nature, Science, the New York Times, Wired and the Harvard Business Review. I currently write monthly columns for Nature Physics and Bloomberg View.