Wednesday, July 30, 2014

That is the title of my first quarter-of-a-publication, in the Journal of Economic Perspectives. The issue includes symposia on entrepreneurship, development, and academic production, as well as some other content. JEP articles are always ungated, courtesy of the AEA. I suspect that the excellent Timothy Taylor will do a full write-up of the issue soon. This particular journal is meant for generalists, and as such the papers tend to be largely nontechnical and digestible (and interesting!).

Our article has been in circulation as a working paper in various versions for maybe 2 or 3 years, and during that time the general topic of declining entrepreneurship and dynamism generally has become pretty widely known outside of academia. I have blogged on this stuff too many times to link. We're also already working on a follow-up study that looks at things like high-growth entrepreneurship, differences between public and private firms, some specific sectors, and trends in the nature of "shocks" hitting firms. I've blogged a bit about that newer paper, here and here; it is still in early stages.

In the JEP paper, we describe some data on the dynamics of young firms, how the growth rate distribution of firm cohorts evolves as they age, the role of young firms in productivity growth (see also here), and some long-term trends that are pretty widely known now. We do some simple accounting exercises to determine the degree to which composition effects are driving long-term trends in gross job flows. Some basic insights and findings:

- New firms experience a strong "up or out" dynamic--a few grow very quickly and survive, while the rest shrink and fail (see Figure 1 in the paper, below, click for larger version). As such, many of the jobs created by startups are destroyed in short order. This is pretty well known in this literature.

(Figure 1 from the paper; I do not own this image.)

- The growth rate distribution of young firms is highly skewed, with some firms growing very quickly and pulling up the mean. Among older firms, the growth rate distribution is symmetric with a mean and median of zero (see Figure 2 in the paper).

- Startups, and reallocation more generally, play a huge role in productivity growth. We discuss this in some detail, and I covered it a bit here; we really just review existing research.

- In shift-share analysis, the aging of the firm distribution "accounts" for about one third of the decline in gross job flows. Changing industry composition (away from manufacturing and toward retail and services) works the "wrong way", since we have moved toward more activity in more volatile industries. When we absorb age, size, and industry composition effects, we "explain" about 15 percent of the decline (note, though, that this is not causal analysis). This means that the decline is happening within cells, and a good explanation for it has yet to be found. As such, policy implications of what we know right now are unclear.

- The decline in dynamism is relentless, indefatigable, indisputable, and undeniable (Yorke 2006), and it is ubiquitous across industry and geography. This suggests that simple policy explanations may not get us very far.

- We conceptualize the question in terms of standard models of firm dynamics, which would suggest that a decline of this kind means either (a) a decline in the volatility of shocks that drive firm outcomes, or (b) a decline in the responsiveness of firms to these shocks (which could be driven by, e.g., technology or policy changes). Our newer working paper sheds some light on this problem.
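To fix ideas on the shift-share accounting mentioned above, here is a toy version of the exercise. All shares and rates below are made up for illustration; the real exercise uses BDS cells defined by firm age, size, and industry.

```python
# Illustrative shift-share decomposition of a decline in the aggregate
# job reallocation rate. Cells (e.g., firm age groups) have employment
# shares s and within-cell reallocation rates r; the aggregate rate is
# sum(s * r). All numbers here are hypothetical.

def aggregate_rate(shares, rates):
    return sum(s * r for s, r in zip(shares, rates))

# Base period: young firms are a big share and highly volatile.
shares_0 = [0.40, 0.60]   # [young, mature]
rates_0  = [0.45, 0.20]

# End period: the firm distribution has aged and within-cell rates fell.
shares_1 = [0.25, 0.75]
rates_1  = [0.40, 0.18]

total_decline = aggregate_rate(shares_0, rates_0) - aggregate_rate(shares_1, rates_1)

# Composition effect: let shares change but hold within-cell rates fixed.
composition = aggregate_rate(shares_0, rates_0) - aggregate_rate(shares_1, rates_0)

print(f"total decline: {total_decline:.3f}")
print(f"share 'accounted for' by composition: {composition / total_decline:.2f}")
```

Whatever composition doesn't account for is, by construction, happening within cells, which is the point made above: much of the real-world decline survives these controls.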

We write,

We do not yet fully understand the causes of the decline in indicators of business dynamism and entrepreneurship, nor in turn, their consequences. Improving our understanding of the causes and consequences should be a high priority. . . .

The declining pace of startups, job creation, and job destruction is mirrored in other measures of the dynamism of American society. . . . Taken together, there appears to be less scope for the US economy to adjust to changing economic conditions through the migration of workers, the reallocation of jobs across producers, and through the switching of workers across a given allocation of jobs.

The paper is reasonably short, nontechnical, and (I think) focused enough to be worth looking through. A lot of this literature consists of papers where you drink from a fire hose of data, but here we've tried hard to be concise (thanks in large part to excellent editors). We started with dozens of figures and tables and whittled down to just a few. When I first encountered the firm dynamics literature, I was blown away by the richness and diversity of market economies that shows up in the administrative micro data. Hopefully this paper will get others thinking about the topic.

I'm very excited about this paper. It builds a lot on work that has been done by people other than me, primarily my coauthors John Haltiwanger, Ron Jarmin, and Javier Miranda, but also Stephen Davis, Lucia Foster, Chad Syverson, and others who are listed at the end of the text. These people, along with others like Erik Hurst, have done and are doing a lot of really interesting work in empirical firm dynamics. In my view this is the best stuff happening in macro these days, as it utilizes large amounts of micro data on firms and establishments to explore big macroeconomic questions. For my involvement in this project I thank my generous coauthors and a series of consecutive luck shocks.

From September 2013 to December 2013, gross job gains from opening and expanding private sector establishments were 7.3 million, an increase of 290,000 jobs from the previous quarter. . . . Gross job losses from closing and contracting private sector establishments were 6.5 million, a decrease of 34,000 jobs from the previous quarter.

I like this data series, with some caveats.* If you're not familiar with this series, note that gross flows are large relative to net flows. Roughly speaking, think of the Great Recession as involving about 8.5 million net job losses. Entering and expanding business establishments create at least half that many jobs even in terrible quarters, but a recession is characterized by even larger numbers of jobs being destroyed by shrinking or closing establishments.
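To put the gross-versus-net distinction in numbers, here is the arithmetic from the quarter quoted above (backing a net figure out of two seasonally adjusted series is rough, as noted in the footnote, so treat this as illustrative):

```python
# Net job change is the small difference between two large gross flows.
# Figures (millions of jobs) are from the BED quarter quoted above.
gross_gains  = 7.3   # opening and expanding establishments
gross_losses = 6.5   # closing and contracting establishments

net_change   = gross_gains - gross_losses   # what the headlines report
reallocation = gross_gains + gross_losses   # total churn behind it

print(f"net change: {net_change:+.1f}M, gross reallocation: {reallocation:.1f}M")
```

A net gain of under a million jobs sits on top of nearly fourteen million gross job moves, which is why the net numbers alone tell you so little about turnover.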

I like to slice the data by extensive margin (opening or closing business establishments) and intensive margin (expanding or contracting business establishments). Figure 1 reports the flows of employment associated with opening and closing establishments, and Figure 2 reports actual numbers of establishments that opened or closed (click for larger images).

Figure 1

Figure 2

New establishments continue to boost net employment, keeping positive job flows ahead of closures for several quarters in a row. The latest quarter shows a slight uptick in gross flows of establishments, which many might consider to be a positive sign. But in general, total reallocation on the establishment extensive margin has been pretty constant since the end of the recession.

I would say that these numbers look promising. Establishment growth is helping drive employment recovery, and shrinking establishments are providing less "drag" over time. Total reallocation seems to be trending upward as well, so in my view these data look reasonably good.

Overall, this is a pretty good report.

Now some usual thoughts: gross flows give us an idea of where jobs are being created and destroyed, which fleshes out the net job numbers that are more popular (and timely). For policymakers, it matters whether job market problems are being driven by establishment turnover or job flows in existing establishments. In my (hasty) view, these latest numbers suggest that both margins are firing reasonably well, which was less the case a few quarters ago. Further, my prior is that the slight upward trend in total reallocation among continuing establishments is a good sign and may boost productivity somewhat.

More broadly, these data help dissuade us from thinking in representative agent terms, which is what the net numbers incline people to do. It's tempting to think that net numbers tell us about the experience of most businesses, but in reality there is a lot of heterogeneity among firms and reallocation proceeds at a high pace. In my view this complicates macro analysis somewhat, rendering simple "aggregate demand/supply" heuristics somewhat tricky.

*The BED are quarterly data provided by the BLS based on state UI data. They are released with a lag of about 8 months. Like the BDS (the dataset I usually use here), the BED basically covers the universe of private nonfarm employers; unlike the BDS, the BED is available at higher frequency and is released more quickly. The BED has other drawbacks compared to the BDS, such as a more limited ability to track firms.

The BLS effectively expanded the sample definition in the first quarter of 2013, and it does not appear that they have done anything to fix the time series. This is very unfortunate, as it limits the usefulness of the time series in ways that are difficult to fully grasp. The 2013q1 observation was the most obviously affected, as it reported all establishments that were added to the sample as establishment openings. For openings data, I have replaced the 2013q1 observation with the average of 2012q4 and 2013q2. I haven't dug into the data enough to know whether users can manually correct for this over the longer run. See BLS discussion here, at the bottom of the page ("Administrative Change Affecting..."). Please, BLS, do something about these time series.

It is also important to note that these numbers are seasonally adjusted, and any guess at net numbers based on the difference between two seasonally adjusted series is very, very rough. Non-SA numbers are available on the BLS website.

These numbers track business establishments, which are different from firms. Costco is a firm; your local Costco store is an establishment. Most firms consist of only one establishment. The BED is not ideal for tracking firms, as it has limited ability to correctly link establishments to the firm level.

So, for example, there are two kinds of vegetables & melons: potatoes, and other vegetables & melons. There are two kinds of hotels/motels: casino hotels, and the others. There are two kinds of foam products: polystyrene, and the others. The metals examples are instructive, demonstrating the value of iron/steel, aluminum, and copper in manufacturing.

Friday, July 25, 2014

When I was a kid I collected aluminum cans from my neighbors, crushed them, and took them to a local scrapyard where I sold them for $0.40 per pound. I can still remember being all sticky after crushing all those soda and beer cans. Profits motivated me to recycle at an early age, but I had no idea that I was participating in such a massive, global industry:

In 2012, the seven thousand or so businesses that constitute the U.S. scrap recycling industry were responsible for transforming 135 million metric tons of recyclable waste into raw materials that could be made into new stuff. That's 135 million tons of iron ore, copper ore, nickel, paper, plastic, and glass that didn't have to be dug out of the ground or cut out of a forest. (44) . . .

The global scrap industry . . . created a multibillion-dollar sustainable business model that stands as one of globalization's great, green successes. (85) . . .

In 2012 Americans exported almost 22.3 million tons--or roughly 40.5 percent--of the used paper and cardboard they harvested. Of that, the majority went to China in shipping containers that otherwise would have crossed the Pacific Ocean carrying nothing more than air. It was joined by millions of tons of recycled metal and plastic, all of which went--like that paper--on what amounted to the unused portion of a round-trip ticket from China to the United States, paid for by American consumers eager for Chinese-made goods. One way or another, the boat is going back to China, and the fuel to send it there is going to be burned, whether or not the ticket is paid for. So anybody--or anything--hopping on that boat is getting what amounts to a carbon-neutral boat ride to China. . . . Of course, the same cannot be said of the weekend recycler who drives the recycling down to the local county dropoff, burning gas all the way. (87)

This is from Adam Minter's Junkyard Planet (2013), a really fun read (discussed more by Adam Ozimek here; Ozimek is how I heard about the book). It is a story about the global scrap industry, which is truly massive and serves as a reminder that markets are reasonably good at using resources efficiently, including natural resources. We all know the caveats about issues with the commons, which Minter discusses in some detail (and Ozimek mentions some of those in the context of this book).

One of the key insights from the book is that the things you probably think of when you hear the word "recycle"--the little blue bins and your selfless efforts to see that they make it to the recycling plant--are tiny compared with the massive profit-making enterprise that accounts for the bulk of recycling activity. Good intentions don't get us nearly as far as profit motives. When Minter asks a guy who runs a Christmas tree-recycling business why people get into this business, the reply is "People wanted to make money. That's all." The result, broadly speaking, is that "by the time a load of Chinese trash arrives at a landfill, very little that's reusable or recyclable is left in it" (27). And you can probably get rich if you can come up with a way to refine and sell the remaining stuff that is currently not considered to be reusable or recyclable.

The scrap industry isn't just about recycling; it also has mechanisms for reusing. Minter describes the industry (mostly Chinese) that buys broken or unused electric motors, fixes them, and resells them, and mentions this interesting idea: "The motors that used to drive U.S. [manufacturing] industry are being exported to China, refurbished, and used to drive Chinese industry" (111). There are similar markets for computer equipment, cars (of course), and lots of other stuff.

This book is a really good read. It has made me think more about entrepreneurship, how the movement of shipping containers is complicated by trade imbalances, development, and even my latest hobby, specificity. I'm not quite finished, so I may have more to say about it later.

Tuesday, July 22, 2014

It is clear that in an economy with an important need to restructure and hire workers in new sectors, unemployment is the result of the positive match surpluses that result from appropriability. . . . Thus, unemployment is an equilibrium response of the economic system, and it serves to restrain the bargaining position of workers in the presence of appropriability. This preserves the profitability of investment. . . . Periods of adjustment and intense gross hiring require high transitional unemployment to prevent surges in shadow wages. Unemployment keeps shadow wages at a level that makes job creation pay off.

This is from page 212 of Ricardo Caballero's Specificity and the Macroeconomics of Restructuring (2007), which I have found very useful. Here are charts of gross output and employment by sector in the vicinity of the Great Recession (click for larger images):

I'm just thinking; I don't have anything else to say about these right now.

Monday, July 14, 2014

That is the title of a new NBER working paper by Morris A. Davis and Stijn Van Nieuwerburgh, and I think it is going to be a chapter in something--the Macro Annual, or a handbook, or something. In my view, these two have done some of the best work on housing; see Davis' stuff here and Van Nieuwerburgh's stuff here.

The paper is really, really good.

It is a survey of the macro literature on housing. The sections of the paper are

- stylized facts

- housing and the business cycle

- housing over the life cycle and in the portfolio

- housing and asset pricing

- the housing boom and bust and the Great Recession

- housing policy

There's something for everyone (well, almost; I'll discuss below). Most of the sections describe a simple model that characterizes the literature on that topic, discussing the model's interesting implications and shortcomings. The paper covers a lot of ground, so it doesn't lend itself well to summarizing. Go read it if you want your mind blown.

Here's a slice:

Housing is not only an important asset in the portfolio, it also has several features that make it different from investments in financial assets. First, it is illiquid in the sense that changing the quantity of housing may take time and/or require incurring substantial transaction costs. Second, it is indivisible: A limited assortment of types and sizes are available for purchase at any time (including a minimum size). Third, home ownership and housing consumption are typically intimately linked. Most households own only one home and live in the house they own. Fourth, housing represents the main source of pledgeable capital against which households can borrow. Investment in housing is much more leveraged than investments in other financial assets and the value of owned housing limits the amount of leverage in households' portfolios. Fifth, housing is tied to a particular labor market: People usually live where they work. (24)

I wonder how many people realize just how weird housing is. In a previous post I wrote this:

For most people, an owned house is a massively concentrated, highly leveraged, totally undiversified bet on one asset class (real estate) in one geographical region. It's a long-term bet on the local labor market and natural environment. It may be a long-term bet on the owner's job match or occupation. The home purchase includes a bundle of local amenities--school district, voting district, neighbors, public administration, commute, etc.--and the new owner is making a bet about the outlook for that bundle as well.

The issue of concentration and asset class is an obsession of mine. Say Davis and Van Nieuwerburgh:

Renters and owners choose substantially different portfolios of financial assets, highlighting that conclusions drawn about optimal portfolio allocations over the life-cycle from models that do not include a rental/own housing choice may be misleading. (36)

It drives me crazy that when I read books about personal investing they rarely (if ever) mention housing as part of the portfolio allocation problem. Another point I've made in previous posts is that buying a house is buying a lifetime stream of rental inventory. That is, rather than paying for housing services as they arrive, like renters do, owners buy a massive flow of services all at once. Tenure is a pretty complicated decision! From the paper's discussion of tenure models:
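The "buying a lifetime stream of housing services" idea is just a present-value statement. A toy illustration (the numbers are arbitrary, and a constant rent and a constant discount rate net of rent growth are strong assumptions):

```python
# Buying a home prepays a stream of housing services. With a constant
# annual rent and a constant discount rate (net of rent growth), the
# price of that stream is a simple perpetuity. Numbers are illustrative.
annual_rent = 24_000          # $2,000 per month
discount_rate = 0.05          # assumed discount rate net of rent growth

price = annual_rent / discount_rate   # value of the perpetual rent stream

# Truncating at a 30-year horizon still captures most of the value:
pv_30 = sum(annual_rent / (1 + discount_rate) ** t for t in range(1, 31))
print(f"perpetuity value: {price:,.0f}; 30-year PV: {pv_30:,.0f}")
```

Seen this way, the tenure choice is a decision about whether to buy that entire stream up front, with leverage, in one local market.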

Although housing is risky, driving down demand, current housing is a hedge against future housing demand shocks since price changes of housing units in the same market are correlated. . . The hedging demand dominates its inherent risk. . . . When households expect to increase their holdings of housing in the future, they buy a bigger home today in response to an increase in house-price uncertainty. If, instead, households expect to down-size in the future, they reduce their holdings of housing today in response to an increase in house-price uncertainty. (35)

What's missing?

My dissertation focuses on a question that is not covered in this paper: housing as collateral for entrepreneurship. Early in grad school, I was looking through some firm dynamics data and noticed that young firms were hit particularly hard by the Great Recession. Then I noticed that both housing and young firm activity started collapsing in 2006, leading the NBER recession date by between 9 and 21 months. I hypothesized that the collapse in the value of housing collateral could lead to a decline in entrepreneurship via a collateral channel. There are now some empirical papers finding suggestive evidence that housing and young firm activity are related (and I have some related empirical work in progress). See a summary here, see also here. This topic has received a lot more attention recently.

In 2012 I did some informal interviews with a handful of bankers. They all told me that housing collateral is important for young firms, particularly brand new ones. The bankers use earnings history to make decisions about many small business loans, but new firms have no earnings history. They must have collateral, and for many entrepreneurs a house is the only collateral lying around. One banker told me that once house prices started falling, he shut down lending to new businesses entirely. Others said that they started significantly discounting the value of housing collateral and tightened loan-to-value ratios. In short, the anecdotal evidence suggested that housing collateral mattered a lot for lending to young businesses.

I built a DSGE model to investigate the topic. In the model, there is a corporate sector but households can engage in production if they want. People can own or rent houses, and owned houses can be used as collateral for any kind of borrowing (including capital rental for your business). In early versions of the model, I took the house price as exogenous. That route taught me the limitations of partial equilibrium reasoning: when house prices receive no feedback from housing investment decisions, things can get pretty wacky. If people think house prices are about to fall, everyone can sell their house (or eat it, if possible), rent, and wait for things to bottom out, relying on cash from the sale to secure ongoing borrowing.* More broadly, general equilibrium matters for thinking about entrepreneurship. The opportunity cost of starting a business is often earning a wage; so you can get more entrepreneurship (at the extensive margin, at least) by doing things that destroy the labor market (which is consistent with some evidence; see Robert Fairlie's stuff). So the supply-side financial frictions that affected large firms can have an ambiguous effect on entrepreneurship generally. Housing collateral, on the other hand, directly affects firms whose balance sheets are tied to households.

You might think that building a model where entrepreneurs need collateral and housing happens to be collateral is like assuming the result. But the model doesn't have to deliver the result that lower home values reduce entrepreneurship. People could respond to the lower house prices by holding more housing (which is what happens, e.g., if housing enters utility Cobb-Douglas style), or by holding more financial savings. But, quantitatively, these options aren't enough. In model experiments, a lower house price is associated with less entrepreneurial activity. This happens despite the fact that there is an unconstrained corporate sector that can make up for lost output from missing entrepreneurs, so wages are not decimated and aggregate demand need not fall dramatically. So I can isolate the effect of housing on entrepreneurship without confounding it with a bloodbath in the rest of the economy. The paper isn't totally finished, but I think the model is generating interesting results.

Who cares? It matters if recessions that are accompanied by (or preceded by) housing sector collapse are likely to also see a collapse in young firm activity, since young firm activity is large compared with net job flows. A labor market is likely to recover from shocks more quickly if firm dynamics are robust; we don't want to have to rely on old firms as a group to generate labor market recoveries, since the gross job creation of expanding old firms is typically matched by gross job destruction of shrinking ones.

So I think its role as business collateral is another reason to care about housing.

*My partial equilibrium version of the model did teach me things, though. An exogenous house price lends itself to the interpretation of price as a technology parameter. A low house price means you can convert a few consumption/capital goods into lots of housing. In this sense, falling house prices are somewhat similar to rising TFP! This generalizes to the endogenous house price case, more or less, particularly if aggregate housing supply is somewhat inelastic. A lot of people are inclined to see falling house prices as all doom and gloom, but it's actually really good for people buying houses, and like any technology shock it can have positive spillovers for other people as well (though probably not positive on net, for homeowners). The technology interpretation also helps the case that model results for entrepreneurship are robust, since the lower price is making people better off in other ways.

Tuesday, July 8, 2014

A few weeks back, Robin Harding wrote a nice FT piece on the "productivity puzzle." Yesterday Ryan Avent wrote a really nice note about the notion that productivity growth is very unpredictable. This stuff got me thinking.

Always remember that aggregate concepts are, well, aggregated. Aggregate productivity growth can be usefully divided into (a) productivity growth within businesses and (b) the failure and exit of low-productivity businesses and the creation of high-productivity businesses. Foster, Haltiwanger, and Syverson (2008) found that in manufacturing, about a third of productivity growth comes from establishment entry and exit (the fraction is likely higher in retail).* Another reason to focus on entry is that young firms invest proportionally more in R&D (Acemoglu et al. 2013). Further, even among incumbent businesses, the effect of firm-level productivity improvements on aggregate productivity depends on the degree to which innovating firms gain resources and non-innovators lose them; the role of reallocation in aggregate productivity growth is therefore huge (Acemoglu et al. say it's 80 percent).
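The within-versus-reallocation distinction can be made concrete with a toy decomposition. All shares and productivity levels below are hypothetical, and this is one simple identity version of the exercise; the literature uses more careful variants (e.g., the Foster-Haltiwanger-Krizan method).

```python
# Toy decomposition of aggregate (share-weighted) productivity growth into
# a "within" term (incumbents improving), a "between" term (shares shifting
# toward more productive incumbents), a cross term, and a net-entry term.

def agg(firms):
    return sum(s * p for s, p in firms.values())

# firm -> (employment share, productivity level); hypothetical numbers
period_0 = {"A": (0.5, 1.0), "B": (0.3, 1.5), "exiter": (0.2, 0.6)}
period_1 = {"A": (0.4, 1.1), "B": (0.4, 1.6), "entrant": (0.2, 1.4)}

continuers = ["A", "B"]
within  = sum(period_0[f][0] * (period_1[f][1] - period_0[f][1]) for f in continuers)
between = sum((period_1[f][0] - period_0[f][0]) * period_0[f][1] for f in continuers)
cross   = sum((period_1[f][0] - period_0[f][0]) *
              (period_1[f][1] - period_0[f][1]) for f in continuers)
net_entry = (period_1["entrant"][0] * period_1["entrant"][1]
             - period_0["exiter"][0] * period_0["exiter"][1])

total = agg(period_1) - agg(period_0)   # the terms sum exactly to this
print(f"total: {total:.3f}  within: {within:.3f}  "
      f"reallocation + entry: {between + cross + net_entry:.3f}")
```

In this made-up example most of the aggregate gain comes from reallocation and net entry rather than incumbents improving, which is the kind of result the papers cited above find in the data.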

It turns out that there is lots of productivity heterogeneity among businesses, with Chad Syverson (2011) finding that the 90th percentile (in terms of productivity) is twice as productive as the 10th percentile (in manufacturing, within 4-digit industries).** In some senses, this fact reflects unrealized potential productivity growth. In a frictionless world, unproductive businesses always fail and productive ones always grow. In the real world, the correlations we need are still there, but things may be changing.

In short, productivity growth doesn't only depend on new technology. It also depends on the allocation and reallocation of resources--labor, capital, etc. If the reallocation machine breaks down, we might not capture all the gains of innovation (depending on the reasons for the breakdown). On the other hand, we can survive a technology growth slowdown if we get better at putting resources where they can be used most productively.

As a side note, recall the Caliendo et al. paper that found that eliminating shipping costs could boost aggregate productivity by as much as 50 percent. I think there are reasons to be optimistic about our ability to effectively reduce distance: the growth of services, 3D printing technology, technologies that can address the last-mile problem, big data, and so forth. But letting these things work means letting resources be allocated to the firms that are pushing them.

*Note that the data used here track establishments, not firms. I'm being loose with language in this post.

**This paragraph and the one preceding it borrow heavily from the literature review contained in some joint work I have with Haltiwanger, Jarmin, and Miranda, forthcoming. And yeah, I have some justified Impostor Syndrome regarding that project.

Saturday, July 5, 2014

About 60 California cities and agencies have imposed mandatory water-use cutbacks, some as high as 50%. In many cases, the rules are enforced by charging higher fees for excess usage. In others, inspectors are deployed to crack down on scofflaws. . . .

Among the most aggressively monitored locales is the state capital, Sacramento. . . .

This year, the city cut outdoor watering to two days a week from three. Because only about half its homes have water meters to measure use, Sacramento must rely on inspectors to help enforce the rules.

A team of 40 inspectors working for the city's Department of Utilities investigate complaints. Sacramento, a city of 475,000, had received 7,604 water-use complaints as of June 18, said city spokeswoman Jessica Hess.

The city and other water districts, meanwhile, are offering carrots along with sticks, paying residents to replace their turf lawns with drought-resistant vegetation.

The state of California does not have enough water to meet demand. One way to eliminate excess demand would be to raise water prices. If there are externality issues, stick a tax on it. This isn't so different from "charging higher fees for excess usage." But for the most part, municipalities have instead opted for the hodgepodge of costly and overlapping remedies described above. Sometimes prices are actually raised, but the timing screws up the incentives. A lot of people seem pretty upset by the restrictions--one wonders if they'd be willing to pay more if they could have more water. The guy with the landscaping business is a pretty good example of Bastiat's "unseen" costs of policy.

A couple of water districts are actually letting prices do some work (from ABC):

Two water districts and a pair of landowners in the heart of the state's farmland are making millions of dollars by auctioning off their private caches. . . .

In California, the sellers include those who hold claims on water that date back a century, private firms who are extracting groundwater and landowners who stored water when it was plentiful in underground caverns known as water banks.

This makes me wonder if the state is actually open to flexible prices, but cities aren't.* Or perhaps utilities are afraid of "price gouging" fights.** In any case, while cities are piling on unenforceable rules, asking neighbors to tattle, and hiring bureaucrats to drive around busting people, some landowners took steps to actually alleviate the drought by storing some water in hopes of selling it. It was a brave move. I wonder how many others throughout the state would have taken similar steps if they thought they could count on a price mechanism.

I don't mean to oversimplify. This is an epic public policy problem with a lot of complicated details. But it does seem likely that water is typically underpriced in the West.

*If I understand correctly, water prices are set by this organization. My assumption is that utilities request price changes and the Commission makes a decision. I would like to know more about this.

**I once asked about this in a comment section at California Water Blog. A commenter responded with the suggestion that for utilities to obtain approval of rate increases, they may have to open their books to regulators--so they are hesitant to do so. My prior is that their books are already fairly open to regulators, but this could be an interesting hang-up.

Wednesday, July 2, 2014

I don't know who would care to read this post. Maybe econ grad students.

Aruoba and Fernández-Villaverde have a nice paper comparing programming languages for solving a standard DSGE model. They have provided a nice public good here, and the paper is worth a look for economists who code. The headline finding is that C++ and Fortran are still the fastest, and (somewhat surprisingly) C++ is slightly faster.

I originally used Matlab for my dissertation model. It was taking a long time to solve, and people in my department finally convinced me to switch to Fortran.* The most time-intensive parts of my solution algorithm take about one tenth of the time they took in Matlab. Other parts got even bigger speedups.
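For a sense of what the time-intensive parts of such an algorithm look like, here is a toy value function iteration for the textbook growth model with log utility and full depreciation. This is a generic illustration, not my dissertation model, and the parameters are standard placeholders. The nested loop over today's and tomorrow's capital is exactly the kind of hot spot where compiled languages like Fortran pay off.

```python
import math

# Value function iteration: V(k) = max_{k'} log(k^alpha - k') + beta * V(k')
# (log utility, full depreciation). Grid and parameters are placeholders.
alpha, beta = 0.36, 0.95
n, tol = 60, 1e-7
k_ss = (alpha * beta) ** (1 / (1 - alpha))            # steady-state capital
grid = [k_ss * (0.5 + i / (n - 1)) for i in range(n)]  # capital grid around k_ss

V = [0.0] * n
converged = False
for _ in range(2000):
    # Bellman update: for each k, search over all k' on the grid.
    V_new = [max(math.log(k ** alpha - kp) + beta * Vp
                 for kp, Vp in zip(grid, V))
             for k in grid]
    if max(abs(a - b) for a, b in zip(V_new, V)) < tol:
        converged = True
        V = V_new
        break
    V = V_new
```

Each Bellman update is O(n²) in the grid size, and the whole thing repeats hundreds of times; add a few more state variables and you can see why a 10x language speedup translates into hours of wall-clock time.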

A lot of people give me a hard time about Fortran and tell me I should switch to Python or something similar. The reason I won't do that is clear enough from the paper. Python is, by all accounts, a very intuitive and versatile language. But my model can sometimes take 24 hours to solve, and even doubling or tripling that would be very costly; to calibrate (or estimate) a model, one must solve it many times. Also, other people in my department use Fortran (it's pretty popular in macro), so there are some nice agglomeration returns. Fortran is very common in scientific computing, so there is a large library of algorithms you can take off the shelf (see, e.g., Numerical Recipes). It's a really easy language to learn--in fact, it's fairly similar to Matlab.

A common critique of Fortran (voiced by the first commenter here) is that, these days, hardware is cheap and programmers are expensive--so easier, more versatile languages are best. That's probably true in much of industry, particularly things like web design. But for tasks that require serious number crunching, and in an academic world with limited resources, hardware is still a binding constraint (and grad student labor--i.e., mine--is cheap). I've been solving my model on 180 processors. A lot of people don't have access to that kind of hardware (until a few months ago, I couldn't use more than 18). Furthermore, there are diminishing returns to parallelization: above 180, I get basically no speedup from adding workers. So I'm not even sure that better hardware could offset Fortran's speed advantage in my case. (Right now, other people in my department are probably wishing I would quit using 180 processors...).
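That plateau is what Amdahl's law predicts: if only a fraction p of the run actually parallelizes, speedup is capped at 1/(1-p) no matter how many workers you add. A quick sketch (the 95% figure below is purely hypothetical, not a measurement from my codes):

```python
def amdahl_speedup(p, n):
    """Amdahl's law: speedup from n workers when a fraction p
    of the total work is parallelizable."""
    return 1.0 / ((1.0 - p) + p / n)

# Suppose (hypothetically) 95% of the solution algorithm parallelizes.
# The cap is 1 / (1 - 0.95) = 20x, so gains fade well before that.
for n in (18, 180, 1800):
    print(n, round(amdahl_speedup(0.95, n), 1))
# 18   -> 9.7
# 180  -> 18.1
# 1800 -> 19.8
```

Going from 18 to 180 workers nearly doubles the speedup; going from 180 to 1800 buys almost nothing, which is the pattern I see.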

If you are doing representative agent models, the speed differences between languages are probably irrelevant. In that case, you probably care more about ease of use and applications other than the number crunching, like making charts. Fortran is pretty bad in this department--I dump all of my output into Matlab and make charts there, and I've been meaning to move those codes over to Python or R so I won't be so reliant on license stuff. But if you plan to only do those kinds of models, Fortran is probably not the right choice. Use Dynare, which is awesome.

If you are planning to solve models with some nontrivial heterogeneity, you need to choose your language carefully. In case you don't know: in a model in which agents differ over a state space, equilibrium prices don't just fall out of a first-order condition. You have to solve for them. The usual way is to guess prices, obtain policy functions, add up everyone's choices, check market clearing, and guess again. While a rep agent model only requires you to find policy functions once, a het agent model requires you to do it many times while you search for the right prices. (A nice side effect of solving models this way is that you get to see partial equilibrium results while it solves). Computing time grows exponentially with the number of heterogeneity dimensions you have, due to the curse of dimensionality. Also, the more prices you have to find, the longer it will take (here's a tip: constant returns to scale technology makes factor prices move in lockstep, so knowing one implies the other). When I went from needing to find one price to needing to find two, it more than doubled my computation time.
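The guess-solve-aggregate-update loop above can be sketched in a few lines. Everything here is a toy: the one-line `solve_policies` stands in for the expensive dynamic-programming step, and the excess-demand function is made up so the example runs instantly. But the structure of the outer loop is the one described.

```python
def solve_policies(price):
    # In a real model this is the expensive step: given a price guess,
    # solve the agents' dynamic programs and aggregate their choices.
    # Toy stand-in: demand falls in the price, supply rises in it.
    demand = 1.0 / price
    supply = price
    return demand, supply

def excess_demand(price):
    demand, supply = solve_policies(price)
    return demand - supply

def clear_market(lo=0.1, hi=10.0, tol=1e-8):
    """Bisection on the price: guess, solve policies, add up choices,
    check market clearing, and update the guess."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if excess_demand(mid) > 0:   # demand exceeds supply: raise price
            lo = mid
        else:                        # supply exceeds demand: cut price
            hi = mid
    return 0.5 * (lo + hi)

print(round(clear_market(), 4))  # -> 1.0 for this toy economy
```

With two or more prices you can no longer bisect and must search over a price vector (e.g., with a quasi-Newton method), which is one reason adding a second price more than doubled my computation time.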

This stuff matters because I think some of the most interesting work being done in macro right now is the empirical stuff based on micro data. To me, heterogeneity is what makes macro interesting. The theories that have to go with the rich micro data are often going to require hard computational work.

*I'll save commenters some time and simply note that I've already heard the one about how you used Fortran in college in the 1970s. It is somewhat funny that this language is still in wide use in scientific computing; but it's also not a huge surprise, since doing floating point calculations over and over again doesn't require the latest bells and whistles. We're not trying to build Instagram here. Also, modern Fortran is a pretty different language from Fortran 77 (the most recent standard is Fortran 2008).