“Equitable Growth in Conversation” is a recurring series where we talk with economists and other social scientists to help us better understand whether and how economic inequality affects economic growth and stability.

In this installment, Equitable Growth Research Economist Ben Zipperer talks with economists David Card and Alan Krueger. Their discussion touches on the origins of empirical techniques they advanced, how the United States is falling behind when it comes to data, and two conflicting threads of contemporary economic theory.

Read their conversation below.

Ben Zipperer: A common theme in the work of both of you is isolating specific interventions or plausibly exogenous changes in the phenomena you’re studying, say in the case of your famous study comparing restaurants in New Jersey and Pennsylvania after a minimum wage increase. What kind of challenges did you face early on in that research, in the days before words or phrases like “research design” and “natural experiment” were ubiquitous terms in the field of economics?

And then also, can you talk a little bit about the influence of the quasi-experimental approach on labor economics today and maybe the field of economics as a whole?

David Card: There are several origin stories that meet sometime in the late ’80s, I would say, in Princeton. One part of the origin story would be Bob LaLonde’s paper on evaluating the evaluation methodologies. So, in the 1970s, if you were taking a class in labor economics, you would spend a huge amount of time going through the modeling section and the econometric method. And ordinarily, you wouldn’t even talk about the tables. No one would even really think of that as the important part of the paper. The important part of the paper was laying out exactly what the method was.

But there was an underlying current of how believable are these estimates, what exactly are we missing. And some of that came to the fore in LaLonde’s paper.

He was a grad student at Princeton in the very first cohort that I advised: He was actually a grad student when I was a grad student, but he was a couple years behind me. Then I was his co-adviser with Orley Ashenfelter. And in the course of doing that work, it became pretty obvious that these methods were very, very sensitive: If you played around with them, you got different answers.

The impetus of that paper was some work that Orley and I were asked to do evaluating the old CETA programs. There were a bunch of different methods that were around and they would give very different answers. So Orley had the idea of setting Bob on that direction and that really evolved that way.

So that was one part of the origin story. Another part was the move from macro-type evidence to micro evidence. There was growing appreciation of that. And the first person that I saw really use the phrase “natural experiment” was Richard Freeman.

Alan Krueger: That’s who I learned it from, too. Richard always had an interest in evidence based on natural experiments. He was an enormous fan of the work by LaLonde; also, the paper Orley did in JASA [the Journal of the American Statistical Association] on the negative income tax experiment. Richard always had a soft spot for natural experiments. But I think he used the term differently than we would.

He applied it to big shocks. So to him, the passage of the Civil Rights Act was a natural experiment. The tight labor market in the 1960s was another natural experiment. I think the way he viewed it was a bit different from the way it started to get applied, which was that the world opened up and made a change for some group that could be viewed as random. When Josh Angrist and I looked at compulsory schooling, we looked at a small change. The natural experiment was just being born on one side or the other of the threshold for starting school, which then affected how many years of education you ultimately got, because under different compulsory schooling laws students would reach the minimum schooling age in different grades.

But that’s where I first heard the term.

Card: Right. And you mentioned research design. I remember Alan was an assistant professor and I was a professor at Princeton and Alan sat next to me. And he, for some reason, got a subscription to the New England Journal of Medicine. (Laughter.) And —

Zipperer: Intentionally?

Krueger: Yeah. I loved reading the New England Journal of Medicine.

Card: Yeah. And the New England Journal would come in every week, so there was a lot of stuff to read. And the beginning of each article would have “research design.”

Krueger: And “methods.”

Card: Yes, and if you’ve never seen that before and you were educated as an economist in the 1970s or 1980s, that just didn’t make any sense. What is research design? And I remember one time I said, “I don’t think my papers have a research design.”

And so that whole set of terms entered economics as a result of those kinds of changes in orientation. But I would say that another thing that happened was that Bob LaLonde got a pretty good job and his paper got a lot of attention. And then Josh Angrist, again following up a suggestion from Orley to look at the Vietnam draft—that paper got a lot of attention. And it looked like there was a market, in a way, for this new style of work. It’s not like we were trying to sell something that no one wanted. There was actually a market out there generally, in the labor economics field, at least.

Krueger: There was, but there was also resistance. (Laughter.)

I agree with everything David said. The other thing—which I think helped to support this, although maybe it gets overrated—is that data became more available, and big datasets like the Census were easier to use.

Historically, when the 1960 Census microdata first became available, Jacob Mincer used it and had an enormous impact. And I think the fact that we were inventorying more data meant that if you wanted to look at a natural experiment, for example, a change in Social Security benefits that affected one cohort and not another, the data were out there to do it.

I think another thing — which was a bit new when we did it for our American Economic Review article on the minimum wage — was to go out and collect our own data when we saw the opportunity to study a natural experiment. But in other situations the fact that there were just data out there to begin with, I think, helped this movement.

Card: Yeah. That was the case with my Mariel Boatlift paper. It was written a little bit before we started working on minimum wages. And in that case, it just so happened that the Outgoing Rotation Group files were available starting in 1979. And so, with those files, it was fairly straightforward to do an analysis of what affected even the Miami labor market.

And in retrospect there’s a new paper by George Borjas flailing around trying to overturn the results in my paper. But in truth, if somebody had been on the ground in Miami in 1980 and gotten their butts in gear, there would have been so much more interesting stuff to do.

For instance, when Hurricane Andrew happened, people actually convinced the CPS to do a survey or supplement, right?

Krueger: Yes.

Card: So I think not just the profession, but maybe even the government, has become a little bit more aware of the importance of really strategically moving resources around and collecting data.

And now the administrative data is available for some things as well.

Zipperer: Speaking of data access, how important do you think it is now for work on the research frontier of labor economics, say, to have administrative data access, or access to often-restricted-access datasets? Is the United States positioned as a leader in this? Or are we paling in comparison to other countries?

Card: Well, we’ve got a lot of disadvantages. One problem is that we don’t have a centralized statistical agency. And so you’ll forever run into someone who wants to do a project and they’re not able to do it because there’s a bureaucratic obstacle to using this particular dataset or that particular dataset.

So, for example, matching the LEHD [Longitudinal Employer-Household Dynamics] data to the Census of Manufactures or the Census of firms. That would be a natural thing to do, but it’s not that easy to do. If there were one statistical agency, it would be a lot easier.

And then the laws of the United States—not just the federal but then the state laws—governing access to, say, the UI [unemployment insurance] files. Partially, those are available to the Feds when they’re constructing the LEHD data or other types of datasets, but they’re not available to individual researchers.

Although Alan and I have both used, for example, data from New Jersey. So individual researchers can, in some cases, contact the state and get some help. But that often requires some combination of a person on the other side who actually wants to answer the phone and talk to you, and maybe some resources.

Krueger: Yes, so I would say we’re behind other countries in terms of administrative datasets. We’ve long been behind Scandinavia, which has provided linked data for decades. And we’re now behind Germany, where a lot of interesting work is being done.

And it’s unfortunate because we did lead the world, I would say, in labor force surveys. The rest of the developed world copied our labor force survey and copied our practice of making the data available for researchers to use.

It’s much more cumbersome, bureaucratic, and idiosyncratic here to get access to the administrative data. And I don’t think that’s good for American economists or for studies of the economy.

And it’s going to make it much harder to replicate work going forward. And that’s unfortunate because I think a strength in economics has been the desire to replicate results.

Card: But I think it is absolutely critical for front-line research in the field to have access to some kind of data. Either you get access to administrative data through personal connections like a lot of people do. Or there are certain countries that make it available, like Germany, for instance—I’ve done a lot of work there—or Portugal. Or like Alan has done where he’s used some of the resources available at Princeton to do some specialized surveys and connect the responses with the administrative data. That’s probably the frontier at this point. But that’s not going to be a thing that a typical person can do very easily.

Krueger: And we haven’t caught up in terms of training students to collect original survey data. I’ve long thought we should have a course in economic methods—going back to the New England Journal of Medicine—and cover the topics that applied researchers really rely upon, but typically are forced to learn on their own. Index numbers, for example. Or ways of evaluating whether a questionnaire is measuring what you want it to measure. And survey design, sampling design and the effect of non-response bias on estimates.

These are topics that other social science fields often teach and we just take for granted that students know it. And there’s a lot of work that’s being done, especially in development economics, on implementing randomized experiments, which I think is a net positive. But there’s also a lot of noise being produced. And I think having more training in terms of data collection, survey design, experimental design, would be helpful for our field.

Zipperer: You mentioned randomized experiments. What are your views on the pluses and minuses of what seem to be a variety of different empirical approaches now common in economic research, such as randomized experiments, actually conducting an experiment? Or a quasi-experimental approach, compared to say, a more model-centric approach? Or even more recent kinds of data mining techniques that let the data tell us the research design?

Card: I would say, and I think Alan would probably agree with me, that at the end of the day, you probably want to have all those things if possible. And each of them has some strengths and some weaknesses.

The strength of a randomized controlled trial is the ability to say you’ve got this treatment and this control group and it’s random. So that means that you’re internally consistent. The weakness is that the set of questions you can ask and the context in which you can ask those questions is often very contrived.

So the one extreme is the lab experiment, where you’re getting a bunch of students and you’re asking them to pretend that they’re two sides of a bargaining table or something similar. And by changing the way you set the protocols for those experiments, as people who work in that field are aware, you can get somewhat different answers. To some extent, the criticisms of psychology that you would see played out in the newspapers recently have a lot to do with those difficulties. It’s not just how you read the script but how you set up the lab and everything else that kind of matters.

So the great advantage of a quasi-experiment or natural experiment like the minimum wage is that it’s a real intervention. It’s real firms that are all affected. You get part of the general equilibrium effect. That’s pretty important for understanding the overall story. The disadvantage is that someone can always say, well, it isn’t truly random. And the number of units might be small. So you might only have two states. At some abstract level, there are only two degrees of freedom there. And so that’s a problem.
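[Editor’s note: The two-state comparison Card describes reduces to a difference of differences. A minimal sketch in Python, with purely illustrative numbers, not the study’s actual estimates:]

```python
# Average employment per restaurant, before and after a minimum wage
# increase in the "treated" state (hypothetical numbers for illustration).
nj = {"before": 20.4, "after": 21.0}   # state that raised its minimum wage
pa = {"before": 23.3, "after": 21.2}   # neighboring comparison state

# Each state's before/after change.
change_nj = nj["after"] - nj["before"]
change_pa = pa["after"] - pa["before"]

# Difference-in-differences estimate: the treated state's change net of
# whatever happened in the comparison state over the same period.
did = change_nj - change_pa
print(f"DiD estimate: {did:+.1f} jobs per restaurant")
```

With only two states, the estimate rests entirely on the assumption that the comparison state tracks the counterfactual, which is the degrees-of-freedom problem Card raises.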

And then there’s a third set of problems, which I’ve alluded to before, which is the types of questions that you can ask. And this is where my former colleague, Angus Deaton, is well-known for his vitriolic criticism of RCTs in development economics.

And I think one interpretation of his concern is the set of questions that can be asked are really so small, relative to the bigger questions in the field. Now that isn’t always the case but that is a concern.

Krueger: Yes, I would just add that no research design is going to be perfect. And you can poke holes in anything. And I think if you believe that existing research is great and we have answered so many questions and we were on the right track before, then one might be hostile towards the growth of randomized controlled trials. But that’s not how I view the earlier state of research.

In my mind, there are two great strengths of randomized experiments. One is that the treatment is exogenous by design. And the other is that it makes specification searching more constrained. It’s pretty clear what you’re going to do. You’re going to compare the treatment group and the control group.

I’ve seen cases where people muck around to generate a result from an experiment. For example, look at Paul Peterson’s work on school vouchers, where he finds no impact overall and kind of buries that, but looks at a restricted sample of African Americans in some cities and argues that we’ve got these great effects from school vouchers, which turn out not to hold up if you actually expand the sample. So I’m not saying that randomized experiments totally tie people’s hands. But I think they do so more than is the case with non-experimental methods applied to observational data.

I’ve become more eclectic over time regarding research methods, as I mentioned at the event earlier today. I mean, I was struck when I worked in the White House at the range of questions I would get from the President. And you’d want to do the best job answering them. That was your job.

And there were some cases where there was very little evidence available and there was some modeling which, if you buy the assumptions of the modeling, could answer a lot of questions.

And I think that was probably better than the alternative, which is having a department come in and plead its case based on no evidence or model whatsoever.

So I encourage economists to use a variety of different research styles. What I think on the margin is more informative for economics is the type of quasi-experimental design that David and I emphasize in the book.

But the other thing I would say, which I think is underappreciated, is the great value of just simple measurement. Pure measurement. And many of the great advances in science, as well as in the social sciences, have come about because we got better telescopes or better microscopes, simply better measurement techniques.

In economics, the National Income and Product Accounts are a good example. Collecting data on time use is another good example. And I think we underinvest in learning methods for collecting data—both survey data, administrative data, and data that you can collect naturally through sensors and other means.

Card: Yeah. For instance, take the American administrative data that’s collected by the Social Security Administration. If you wanted to do something very simple to that dataset that would make it possible to do a lot more, you could ask each employer who reports their employees’ Social Security earnings data to also report the spells that they worked: the starting and ending of the job.

That simple kind of information—which could be collected, maybe with some burden, but in many cases almost trivially—would expand the usefulness of that dataset enormously, for an amazing set of purposes.

It turns out, that’s what they do in other countries. So you can then take an administrative dataset like the Social Security Administration’s, and that suddenly becomes a spell-based dataset, because you’ve got every employment spell that somebody had during the year, automatically, for free.

It’s not perfect, but it’s just a quantum improvement. Unfortunately, though, we don’t have anybody saying, well, what could we do to make administrative datasets better and more useful for research?

There are people at the Census Bureau who are kind of working on matching administrative and non-administrative survey-type datasets. But oftentimes that’s way down in the subterranean levels, partially because of the concern that if people knew that you could actually take the Numident [Numerical Identification System] file and attach a Social Security number to every piece of paper going through, they would be shocked somehow. So we have quite a problem here.

Zipperer: So, to take another concrete case where measurement seems to be particularly important and related to work that you’ve done on minimum wages, what kind of wage spillover effects do minimum wages generate for people who are, say, earning above a new minimum wage after a minimum wage increase?

There’s a lot of work showing that there are spillover effects and there are questions about how big they are, perhaps due to a measurement error in wages and survey data. What are your views about why these spillover effects seem to exist?

Krueger: Let me make some initial comments. In our book, we discovered spillover effects. When I say we discovered it, I mean we asked in a very direct way: When the minimum wage went from $3.35 to $4.25 and you had a worker who was making $4.50, did that worker get a raise as a result?

And what we found was that a large share of fast food restaurants responded “yes.” We had these knock-on effects or spillover effects.

Interestingly, they tended to occur within firms that were paying below the new minimum wage. You had some restaurants that were already above the new minimum wage. And the increase in the minimum wage had very little effect on their wage scales, which suggests that internal hierarchies matter for worker morale and productivity.

Only to economists is that surprising. The rest of the world knows that the way they’re treated compared to other people influences their behavior, the way they view their job, how likely they are to continue in their job, and so on.

The standard textbook model, by contrast, views workers as atomistic. They just look at their own situation, their own self-interest, so whether someone else gets paid more or less than them doesn’t matter. The real world actually has to take into account these social comparisons and social considerations. And the field of behavioral economics recognizes this feature of human behavior and tries to model it. That thrust was going on, kind of parallel to our work, I’d say.

Now, I also found it interesting that when the minimum wage was at a higher level compared to a lower level, the spillover effects were less common.

So to some extent, the spillover effects are voluntary and the companies are willing to put up with somewhat lower morale when the minimum wage is at a relatively higher level. And I always found it curious that companies would complain, “It’s not the minimum wage itself, it’s that I’m going to have to pay more than everybody else.” Well, that shows that you’re actually not behaving the way the model that you just cited to argue that you are going to hire fewer workers says you should behave. Because you’re voluntarily choosing to pay people, who were working before at a lower wage, a higher wage.

And it also gets you to think, well, maybe the wage from a societal perspective was too low to start with. And the fact that employers are taking into account these spillover effects when they set the starting wage means that from a societal perspective, we could get stuck in an equilibrium where the wage is too low.

Now, I always suspected that the spillover effects kind of petered out when you got 50 cents or a dollar an hour above the new minimum wage. But interestingly, work by David Lee, who was a student of David’s and mine at Princeton, suggests that the spillover effects are pretty pervasive throughout the distribution. And he used a different method, one that I think is quite compelling: looking at what happened around minimum wage increases in states where they really had more of a binding effect.

And he found quite significant spillover effects. So one area where I think the literature has deviated from what we concluded in our book was we thought the spillover effects were there but they were modest. And I would say, if anything, it points to a larger impact of the minimum wage because of the spillovers.

Card: Thinking about why these occur—Laura Giuliano, who attended the conference today, has a very interesting new paper studying a large retailer that has establishments all across the country, where wages were set at the company level.

And the paper shows that employees who were above the minimum wage, but in stores where different fractions of the employees below them got bigger and smaller raises, have differential quit behavior. So it’s really strong direct evidence of this channel that everyone has always thought is probably true.

I think that our understanding of exactly all the forces that determine the equilibrium wage distribution is pretty limited, to tell you the truth.

In the United States, for example, it’s very, very difficult to get an administrative dataset that would say: Here’s everybody that works together at the firm. And let’s treat that, as Alan was saying, as part of the social group. What things do they share? What features of their outcomes seem to be mediated through the fact that they all work for the same employer?

And in the Scandinavian countries, there’s quite a bit of work going in that direction. One really simple example is if a male boss at a firm takes leave when his wife has a baby, then the other employees do too. So that’s just a really simple example of the kind of work that you could do if you had the ability to match these datasets together and show they were all at the same firm.

I think outside of economics, in sociology for instance, they’ve always thought that a very important part of everyone’s identity is the firm they work for and who they work with.

And it has to be really influential in how you think about your life and how you organize your time and people you hang out with and so on. But in a standard economics model, that’s all thrown out the window. And for some questions, it might be second-order at best. But for other questions, it seems like it’s first-order.

Zipperer: Do you see that changing somewhat with, for example, your and others’ work on the nature of the firm influencing inequality?

Card: Well, I’m always hopeful. (Laughter.)

Krueger: Yes, I would say the success of behavioral economics is a major development in economics.

Card: And in labor economics especially, I’d say.

There is an interesting thing going on in economics. So, we see job market candidates that come through every year. And there’s sort of two sides of economics in their work simultaneously.

One side is uber-technical. More and more technical stuff every year. You cannot believe the complicated ideas that people are trying to pretend that individuals are working with and choosing whether to do this or that.

And on the other side, behavioral economics is almost a reaction to that. It says, “Actually, those effects are all third-order. The first-order thing is the concern about how you rank relative to your peers.”

So the great advantage of behavioral economics is that it is saying, “OK. I’m going to try and simplify away from this incredibly complicated thing where your choice about whether to participate in a welfare program is influencing how you’re going to divide up the surplus between you and your husband and whether you’re going to be divorced next year.”

I saw a paper like this last week and I honestly thought, “If I could think this through myself, it would be a miracle.” (Laughter.) I spent my life thinking about that.

Krueger: And you oversimplified it: You’re considering each step of the way, assuming you will make optimal choices each year in the future, and then integrating back to figure out what to do today.

Card: So there are these two strands of economics that are really fighting it out right now in the theory side. And in a way, behavioral economics is much more closely linked to what I think someone earlier today was calling institutional economics. So it’s the idea that people are doing a set of things, maybe rules of thumb and so on, that are influencing how they choose what they do. That maybe we would gain a lot from understanding those things a little bit better.

Zipperer: At the beginning of this discussion, a lot of arrows seemed to point back to Orley Ashenfelter. Could you talk about his influence on your work and maybe the field generally?

Card: Well, for me it’s very strong because he was my thesis adviser and really the reason why I went to Princeton as a grad student. And even as an undergraduate, the two professors who I took courses from that had the most influence on me were students of Orley’s.

So my connection to him goes back a long time. And we wrote a bunch of papers together over the years and advised many students. But also many of the people of my generation of labor economists, like Joe Altonji, John Abowd, or other people like that, were strongly influenced by Orley.

Right from the get-go, he was a very, very strong proponent of “experiments if you can do them” and “collect your own data if you can do it” and “spend the money if you can.” One time, he and Alan went to the Twinsburg Twins Festival and collected data on twins.

Krueger: One time? Four summers in a row we went to Twinsburg, Ohio, with a group of students. We brought a dozen students. (Laughter.)

And it was actually classic Orley because he spent a lot of time choosing the restaurant for dinner, a lot of time chatting with some people, and not too much time collecting data, as I recall.

I read Orley’s work when I was an undergraduate. And a big part of the attraction for me to come to Princeton was Orley, and then David was just really a bonus who I ended up working with so closely for a decade.

And I think Orley kind of set the tone for the Industrial Relations Section. He had done work on the minimum wage with Bob Smith at Cornell, on non-compliance and how much non-compliance there was, which made us think that, if you really want to look for the effects of the minimum wage, you need to look in places where it’s binding and companies are complying.

He had a healthy dose of skepticism about the research that had come from the National Minimum Wage Study Commission, which he sometimes called, as I recall, the National Minimum Study Commission.

Card: Minimum Study Wage Commission.

Krueger: The Minimum Study Wage Commission. (Laughter.)

Card: You can quote me on that.

Krueger: We’re just quoting him. (Laughter.) And he used to like to tell a story, which I remember vividly, where he met with some restaurant group when he worked, I think, at the Labor Department. And they said, “We’ve got a problem in our industry: The minimum wage is too low and we can’t get enough workers.”

And that’s inconsistent with the kind of view that the market determines the wage, and you get all the workers you want at the going wage, and you can raise the wage if you can’t get enough workers. And I think he was always sympathetic to the famous quote in “The Wealth of Nations,” where Adam Smith said that employers rarely get together without the conversation turning to how to keep wages low; that there’s a tacit and constant collusion by employers. So I think he kind of set a tone where it was acceptable if you found results that went against the conventional wisdom.

And I came from an environment where even Richard Freeman at the time, who was a somewhat heterodox economist, had written that there’s a downward sloping demand curve for low-wage workers and a higher minimum wage reduces employment, but not all that much, but you get the conventional effects. So that was my background coming in.