
But It’s All In The Name Of Science, Homelessness Edition…

I want you to think back for a moment to those science projects you did when you were a kid…you know, you had the question that you wanted to answer, the control and the experimental group, etc. etc. Do you happen to remember what the point of all of that was? The general idea is that there is only one difference between the control and the experimental group, so if you see a difference in outcome you can pretty safely conclude that it’s because of that one difference between the groups. For example, elementary-school econgirl did a project investigating what shape of paper airplane flies the furthest. It would have been scientifically invalid for me to make the airplanes out of different types of paper or to fly them in different places or have different people throw them or whatever, so I was very careful to keep everything constant except for airplane shape.

The scientific method is not relevant only to little kids and science fairs, however, and it is used extensively in the “real” sciences. (By “real” I mean physics, chemistry, etc.) Social sciences (read: economics), on the other hand, rely more on observational data and natural experiments. Given the above implication that controlled experiments are superior in terms of scientific discovery, why don’t economists, sociologists, etc. use them more often?

In the name of getting good statistical data, New York City is randomly denying poor people access to a program designed to stave off homelessness. If those unlucky people become homeless, you know it works! Oh… is that frowned upon?

The program in question is called Homebase. It gives rental assistance, job training, and other benefits to people facing “immediate housing problems that could result in becoming homeless.” People in imminent danger of being on the street. The poorest of the poor, next to actual homeless people.

To find out whether or not Homebase really works, the city is conducting a study. As the NYT puts it, “Half of the test subjects – people who are behind on rent and in danger of being evicted – are being denied assistance from the program for two years, with researchers tracking them to see if they end up homeless.”

(The original NYT article is here, but I think that the Gawker commentary gives some insight into popular opinion on the matter.)

Here’s the thing about the world: resources are limited (yes, even resources for warm fuzzy things like helping almost-homeless people), and society is better off if those resources are put towards things that actually work and not wasted on things that don’t. I think that most people would agree with that, but then they get pretty squeamish when researchers actually try to figure out what works and what doesn’t. My suspicion is that not everyone remembers their elementary-school science projects, and those who don’t can’t see why the researchers can’t just help everyone and see what happens. Luckily, those in charge of this experiment seem to be familiar with the concept of selection bias:

But Seth Diamond, commissioner of the Homeless Services Department, said that just because 90 percent of the families helped by Homebase stayed out of shelters did not mean it was Homebase that kept families in their homes. People who sought out Homebase might be resourceful to begin with, he said, and adept at patching together various means of housing help.

In other words, yes, 90 percent of people who got help stayed off of the streets, but it’s entirely possible that they would’ve figured something else out had they not gotten this particular form of assistance. Therefore, the researchers can’t conclude based on this evidence that the program actually had the effect that it was going for. (See here for more on correlation versus causation.)
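To make the selection-bias point concrete, here’s a toy simulation. (This is a sketch with made-up numbers: the 85%/40% baseline rates and the 10-point program boost are assumptions for illustration, not figures from the actual study.) If resourceful families both apply more often and do better on their own, the raw success rate among helped applicants looks far more impressive than the program’s true causal effect, which only a randomized split recovers:

```python
import random

random.seed(0)

# Toy model (illustrative numbers only, not from the Homebase study):
# "resourceful" families stay housed 85% of the time even without help,
# less resourceful families only 40% of the time, and the program adds
# a flat 10-percentage-point boost to anyone who actually receives it.
BASE_RATE = {"resourceful": 0.85, "not_resourceful": 0.40}
PROGRAM_BOOST = 0.10

def stays_housed(kind, treated):
    p = BASE_RATE[kind] + (PROGRAM_BOOST if treated else 0.0)
    return random.random() < p

# Self-selection: resourceful families are far more likely to apply.
applicants = ["resourceful"] * 800 + ["not_resourceful"] * 200

# Naive observational number: help everyone who applies and report the
# share who stay housed. This mimics the "90 percent" headline figure.
naive = sum(stays_housed(k, True) for k in applicants) / len(applicants)

# Randomized experiment: flip a coin for each applicant instead.
treated, control = [], []
for k in applicants:
    (treated if random.random() < 0.5 else control).append(k)

rate_treated = sum(stays_housed(k, True) for k in treated) / len(treated)
rate_control = sum(stays_housed(k, False) for k in control) / len(control)
effect = rate_treated - rate_control

print(f"share of helped applicants who stayed housed: {naive:.0%}")
print(f"true causal effect of the program: {effect:.0%}")
```

In this toy world the naive number comes out in the mid-80s (close to the 90 percent quoted above) even though the program itself only moves outcomes by about ten points; the coin flip is exactly what separates the two.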

The people involved with this project seem to be trying to do a good thing but are getting caught in a bit of an unfair PR nightmare. Granted, there would likely be less grumbling if this were a new program and only some people were being offered it, whereas the reality is that it’s an existing program that is being randomly taken away from some people. Entitlement aside, the distinction isn’t really important, but it’s not always easy to get people to see that. It’s also relevant to note that the program has a limited budget, and its spokespeople specifically point out that not everyone who applies gets assistance anyway. If those objecting realized that the same number of people were getting assistance, but those people were getting it via a lottery outcome rather than because they applied first, would the objections be as loud? See, there are lots of things that I am curious about.

Most perplexing to me is the fact that no one seems to have their panties all in a twist about clinical drug trials, even though they employ exactly the same methodology as the study described above. I mean, how pissed off would you be if you were the cancer patient who got the placebo for the new treatment that turned out to work great for the people who actually got it? That said, I’m not holding my breath in hopes of seeing a New York Times feature on the issue.

In case you’re curious, economists are getting onto the “hey, let’s see if all this stuff we’re doing in developing countries actually works” boat and using data to design more effective and efficient humanitarian programs. For example, a classmate of mine is involved with TamTam, an organization that distributes malaria nets to needy women in Africa. Did you know that a $7 malaria net can reduce the risk of childhood mortality by 20 percent? Did you know that women can be encouraged to go to prenatal checkups if the distribution of the nets is paired with such visits? The people at TamTam know these things because they’ve gathered data in such a way that they can tease out cause and effect.

I don’t know about you, but I would rather support organizations that take specific care to make sure that the projects they undertake have the biggest possible impact. Unfortunately, it’s often impossible to tell what will have the biggest impact if we always give everyone what they think they want. Science is a bitch sometimes.

8 responses so far

There’s a big difference with drug trials: people participate voluntarily and in some way benefit from the research (e.g. by getting the treatment once it has been approved). Additionally, studies generally test a new drug against the best currently available drug and see if it’s more effective. The NYT actually did have an article a couple months ago about a promising cancer treatment that is radically different from its predecessor, and the ethical concerns raised by a study testing the new drug against the old one: http://www.nytimes.com/2010/09/19/health/research/19trial.html

What do the people who are denied benefits get from the study? Will the state reimburse them for the “cost” of their participation in the control group if they end up homeless? Of course not. All they get is the stress of participating in a lottery where, at best, they are no better off than before.

Surely there must have been data available from before Homebase was implemented. Similarly, comparisons between different cities with and without similar programs could be used. Yes, there are weaknesses with both approaches, but there are limitations to any study.

In this particular study, consider the implications: there are (apparently) no changes to how many people rely on outside assistance. So these non-profits may be able to assist a certain number of people – if they are good at it, the Homebase program may be found to be no better than what those in the control group achieve and get scrapped. But that would lead to many more people relying on non-profits, which couldn’t handle the huge increase in demand. Then again, the current recession may also be a bad baseline – is it really representative of a “normal” year?

I imagine a significant reason people are upset: this is a program targeting some of the city’s poorest – a $20m footnote on a $65bn budget (expected to grow to $75bn by FY’14). The benefit to the city (i.e. taxpayers) is negligible, whereas the cost to those affected is significant. Sounds to me like this is a terrible trade-off.

So that’s an interesting comparison with medical drug trials. I’ve seen it mentioned a few times that after a particular drug was found to be working way, way better than the placebo, the trial was halted and everyone was given the drug.

Bearing in mind the limited funds available for this program (and accepting, as David says, that this is a major reason why people are upset), it seems unlikely that a similar outcome would be achieved in the event that Homebase was found to be particularly effective. So the perceived “good” which would come out of this trial would only materialize if:
a. The statistics prove that the program is effective (I think this makes sense – why fund a useless program?) AND
b. The people who write and implement the policies actually pay attention to the statistical evidence.

I can appreciate the need to figure out whether Homebase actually works, but randomly denying people is pretty messed up.
Clinical Drug Trials are different in an important way. A person signs up to be in a clinical drug trial and they are notified that they might be given the placebo.
Is there a way to notify people that they might be randomly denied help?

The opposite framing would have been better: if they can’t serve all the eligible people, they should run a lottery on the applicants who come in, so that they are randomly SELECTING, not randomly denying.

My favorite part of the NYT article is that it argues against itself. They bring up an example of someone for whom the program was a last ditch attempt to get help but who was denied help from Homebase. So what happened? Well, at the end of the article you find out that it wasn’t truly the last resort: they managed to find housing elsewhere anyway! So much for outrage….

I agree that this is a PR nightmare and could have been handled better. I don’t agree that assistance was randomly taken away from some unless there was enough money available to fund all applicants. If there is not enough money for all applicants, the ones who get the money or not must be selected in some way. If the organization had said that they didn’t have enough money for everyone and so awardees would be determined randomly, people might have perceived this as fair. Then the study could be done to investigate the consequences.
This is pretty much what was analyzed by Caroline Hoxby when she looked at the effect of charter schools in New York City (NBER working paper w14852). New York City has more poor children who want to go to the charter schools than there are slots. Thus, they distribute slots by lottery. Hoxby and Murarka compared the students who got in to those who didn’t to determine the effect of the charter schools.
I think the distinction is that in the schools case the study was done because the lottery created an experiment whereas in the homeless case, the experimental design demanded a lottery. In both cases, a lottery allocated a scarce resource and a study was done to evaluate the effectiveness of a program.

There is only one way to really help a homeless person and that is to give them a temporary home. It would also help if there was a way to make money doing it so that those with money would invest in solving this problem. GivemShelter org has the beginnings of a real solution.