
Noah Smith has an article proposing some kind of quasi-mandatory national service. The end-goal is not, say, winning WWII, but rather the social cohesion side-effect gained from making young people from different backgrounds work together. For many reasons, I think this is a bad idea, but perhaps the most important is that for it to “work”—to really forge some kind of deep band-of-brothers connection—you’d have to impose terrible costs on the participants.

The reason military service is described as “service” or a “sacrifice” is that it is, even in peacetime. You risk death and terrible injuries, both mental and physical. You lose a great deal of personal freedom and gain a great deal of worry and anxiety. You risk seeing your friends and people you are responsible for killed and maimed. You spend months and even years away from loved ones. I spent 5 years in the Army as a tank platoon leader & company executive officer, after 4 years at West Point. Of my active duty time, 15 months were spent in Iraq (Baghdad and Karbala). It was, without a doubt, the worst experience of my life—nothing else even comes close, and I got off easy.

One might say, well, this is just the “war” version of military service. Not really. Outside of combat, back in Germany: one soldier in my battalion (slowly) drowned when his tank got stuck in deep mud during a training exercise and the driver’s compartment filled with water; another in my brigade was electrocuted when loading tanks onto rail cars; another young soldier from my brigade was, two weeks after arriving in Germany, promptly robbed & beaten to death by two other privates from his battalion. With our deployment looming, one lieutenant in our brigade went AWOL and later killed himself. And I’m not considering the numerous injuries. This was never summer camp.

When you peel back the superficially appealing aspects of military service—focus on teamwork, training, college benefits, supposed egalitarian design, etc.—you’re confronted with the fact that militaries are impersonal bureaucracies that (1) treat soldiers as a means to an end, and (2) are designed to efficiently kill people and destroy things. Both features are necessary, but that does not make them less evil. Participating in those two functions, no matter how just the cause, is mentally damaging for many, and deeply unpleasant for almost everyone.

So that’s all cost. Does military service “work” to build cohesion? I would give a qualified “yes,” but I don’t think it’s a generalized social cohesion Smith is after anyway—I don’t feel some deep attachment to the white working class, though I am more familiar with that culture than I otherwise would be. I’m sure I know more Trump supporters than the average (any?) NYU professor, but I don’t think I’m any more sympathetic. I have a bond to soldiers from my *platoon* and a deep friendship with some of my fellow officers, but here’s the rub—it’s based on the shared sacrifice. If we had just spent our time together fixing up trails or building playgrounds, those fellow soldiers would be something I already have lots of—former work colleagues.

To wrap it up, society doesn’t get the cohesion without the costly sacrifice, and creating that sacrifice artificially would be deeply wrong. And if the goal of mandatory service is just to get people to meet people from other backgrounds—say the kind of band-of-brothers level cohesion isn’t needed—surely there are cheaper, less coercive ways to do it.

One sociological critique of economics is that unlike the physical sciences, economic research can affect the thing it studies. I might not be using the jargon the correct way, but the basic idea is that economics is “performative”—it’s not just a magnifying glass—it’s a magnifying glass that sometimes focuses the light and burns what you’re looking at. I have an example of this from my own work that bugs me more than a little bit, but is, ultimately, my own fault. Let me explain.

So back in graduate school, Lydia Chilton and I wrote a paper called “The Labor Economics of Paid Crowdsourcing” (data & code here). In a nutshell, we introduced the labor economics way of thinking about labor supply to the crowdsourcing/computer science crowd. We also did some experiments where we varied earnings to see how workers reacted on MTurk. We thought the really cool “so what” of the paper was that we presented strong evidence for target earning—that workers had a preference for earning amounts of money evenly divisible by 5 (here’s the key figure—note the taller black histogram bars):

Almost as an afterthought, we estimated the distribution of worker reservation wages for our (*very* unusual) task. We came up with a median value of $1.38/hour, using some strong assumptions about functional form. We put this in the abstract & we even discussed how it could be used to predict how many people would accept a task, because every paper has to make some claim about how it is useful.

Anyway, every once in a while, I see something on twitter like this (7 *years* later):

Hmmm, I wonder where that $1.38/hour figure came from. Anyway, mea culpa. If you’re a MTurk worker, my apologies. Feel free to cite this blog post as the authority that $1.38/hour is a silly number that shouldn’t anchor wages.

I unexpectedly got into two Twitter discussions recently about Silicon Valley (SV) and its effects on the local real estate market. I felt constrained by the 140 character limit, so I thought I’d write a blog post explaining my thinking (and add supply & demand diagrams!).

To understand what is happening in SV, we need to think about three markets:
(1) the product market for what SV tech companies sell
(2) the SV tech labor market and
(3) the SV housing market.

First, what’s obvious: there’s been a huge increase in demand for what Silicon Valley sells: the world is using way more IT than it used to. Someone has to build & program this stuff, and so there’s been a large increase in demand for certain kinds of high-skilled labor—namely software engineers, designers, product managers and so on. Let’s call them “tech people.”

Most tech people are transplants, coming to SV specifically to work in tech. They need a place to live. As such, a demand shock for tech labor is also a demand shock for housing in SV.

How the labor demand shock plays out

In the figure below, the top diagram is the labor market and the bottom diagram is the housing market. The y-axes are wages and real estate prices, respectively. The x-axes are tech people hired and units of housing consumed, respectively. The connection between these two markets is so tight that I assume that changes in tech people employed must be met one for one with changes in housing units consumed. This is why the two diagrams are stacked on top of each other.

Pre-boom equilibrium:

Here comes the iPhone: Tech Boom!

Let’s consider how a product market demand shock leads to a new equilibrium. First, the demand curve for labor shifts out (in red, top panel). If we ignored the housing market, we would just see higher wages and more tech people hired. However, these new tech hires want a place to live, so they shift out the demand curve in the housing market (in bottom panel, also in red).

But the tech people labor supply curve depends on housing costs

At this new higher price for housing, fewer tech people are willing to work at each wage (i.e., “I’ll stay in Seattle and work for Jeff Bezos, spending more on tissues and psychological counseling, but spending less on rent”). The higher housing prices shift in the tech people labor supply curve. This shift takes some pressure off housing demand, pushing down housing prices a little. This tâtonnement goes back and forth until a new equilibrium is reached with:

(1) more tech employees (but not as many as there would be in the absence of housing effects)
(2) higher wages and
(3) higher real estate prices
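This back-and-forth adjustment can be sketched numerically. Below is a toy R version with linear curves; every parameter value is invented purely for illustration, not calibrated to any real market:

```r
# Toy tatonnement: the labor market and housing market clear in turns.
# All parameter values are invented for illustration.
labor.demand.intercept   <- 200  # wage at which no tech workers are hired
labor.demand.slope       <- 1
labor.supply.intercept   <- 20
labor.supply.slope       <- 1
housing.sensitivity      <- 0.5  # higher housing prices shift labor supply in
housing.supply.intercept <- 10
housing.supply.slope     <- 0.8  # 0 would be a perfectly elastic housing supply

p <- 0  # starting housing price
for (i in 1:100) {
  # labor market clears, given the current housing price
  L <- (labor.demand.intercept - labor.supply.intercept - housing.sensitivity * p) /
       (labor.demand.slope + labor.supply.slope)
  # housing market clears, given employment (one worker = one housing unit)
  p <- housing.supply.intercept + housing.supply.slope * L
}
w <- labor.demand.intercept - labor.demand.slope * L
round(c(employment = L, wage = w, housing.price = p), 2)
```

Setting housing.supply.slope to 0 (a flat housing supply curve) reproduces the fully elastic case: housing prices never move, and the housing channel has no effect on the labor market.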

Where we are now:

The importance of the housing supply elasticity

As you might expect, how this process works out depends a great deal upon how these curves are shaped and how big these shocks are. One critical piece is the slope of the one curve that didn’t move around—the housing supply curve. From the perspective of tech and non-tech workers and tech firms, we can say “elastic” = “good” and “inelastic” = “bad” (existing homeowners are another story).

Elastic supply = good. Imagine a better world in which the housing supply is completely elastic. The housing supply curve is flat. This means that no matter how large the positive demand shock in the housing market, house prices stay the same. Here, the demand shock in the tech labor market has no effect on non-tech workers through the housing channel (because housing prices do not rise). Also note that there is no pulling in of the tech worker supply curve—the workers get the “full” benefit in higher wages.

Inelastic supply = bad. Now, let us imagine a world where housing supply is completely inelastic, making the housing supply curve a vertical line. In this inelastic case, the housing stock is fixed. We already “know” that tech companies aren’t going to be able to hire more. Tech wages are going to rise, but the main beneficiaries will be existing owners of housing, because of the price increase. They get enormous rents—literally. Of course, the curve is not completely inelastic, because one very controversial “source” of elasticity is displacement: the tech people move in, the non-tech people move out. This is why people throw yogurt (at best) at tech buses.

How do non-tech people fare?

A more complete analysis might consider the effect of the tech boom on non-tech wages. Presumably they get some benefit from increased demand for their services from tech people. And to some extent, non-tech sectors have to increase wages to get people to still live and work in SV. It seems unlikely to me that this is fully offsetting.

The main adjustment is probably housing displacement, meaning longer commutes. It makes more economic sense for them to move farther away (e.g., travel an extra hour a day and save $20/day on rent). That being said, they are almost certainly worse off with these horrendously long commutes than they were pre-boom.

What are the solutions?

Do nothing. One “solution” is to do nothing, under the belief that things will run their course and the tech boom will fizzle. To the extent that the boom does not subside, other places in the world will become relatively more appealing for tech as the high cost of labor in SV (driven by housing) persists. However, to date, SV seems to be becoming more important and tech more centralized in SV, not less, so this might be a slow-acting solution. Further, it seems bad for SV as a region: if I were king of SV, I wouldn’t be sanguine about the “Detroit solution” to too much product market demand for what my region specializes in.

Build more housing. Another solution (of course) is to increase the housing stock. This should push prices down. A better solution might be to enact structural changes to make the supply of housing more elastic. Given how much housing prices have risen, it seems that the supply is very inelastic (more on this later).

Let people work remotely. Another solution is radically different, which is to try to sever or at least attenuate the connection between the housing and labor markets. This is the “Upwork” solution in a nutshell, which their CEO outlined in a recent Medium post. If a tech company is open to remote hiring, then those remote hires never enter the local rental market and do not drive up prices. It does not even have to be either/or, as letting your employees work remotely some of the time helps: if I only have to be in San Francisco three days a week, living in Half Moon Bay rather than Cole Valley becomes much more attractive (Uber also helps here and autonomous vehicles would help a lot).

I’m particularly optimistic about (3), the tech-focused solution, as it seems more likely to “work” right away and it requires little political change. Also, somewhat ironically, the increasing maturation of technology for remote collaboration means that this approach should become more attractive over time.

Incidentally, why is the supply of housing in SV so inelastic?
Some of it is surely geography, about which little can be done. The peninsula is just not that wide and there aren’t large, nearby tracts of undeveloped land. I imagine that the, uh, interesting geological properties of the area matter for construction. However, the main cause seems not to be so much the quantity of land, but rather the intensity with which the land is used.

Take a Google Street View walk of Palo Alto or Menlo Park. When you consider how large the demand for housing is and then look at the built environment of those cities, there is a wild disconnect. These cities should be Manhattan-dense, or at least Cambridge, MA-dense, but they are not—it is mostly single family homes, some on quite large lots. They could be nice suburbs more or less anywhere. These are *very* nice places to live, of course, and I can understand the instinct to preserve them as they are. But the unchanging neighborhood character of Palo Alto is part of the reason why tech is having a huge negative externality on non-tech people, through the channel of higher housing costs.

Relevant disclosures: I used to work at Upwork’s predecessor company, oDesk. I still consult with them and I conducted academic research with their data. I also visited Uber as a visiting economist last summer and my wife works for them still. When I worked for Upwork, I lived in Redwood City until my landlord decided to not renew our lease so he could sell the place. We rented somewhere else that was a little cheaper, but my commute got longer. I might go back to SF to work for a bit this summer, if I can find a cheap enough place on Airbnb.

In a recent NYTimes article about Uber drivers organizing in response to fare cuts, there was a description of the rating system and how it affects drivers:

They [drivers] are also constrained by the all-important rating system — maintain an average of around 4.6 out of 5 stars from customers in many cities or risk being deactivated — to behave a certain way, like not marketing other businesses to passengers.

Using “marketing a side business” as an example of behavior the reputation system curtails is like saying “the police prevent many crimes, like selling counterfeit maple syrup“—technically true, but it gives the wrong impression about what’s typical.

Bad experiences on ride-sharing apps presumably mirror bad experiences in taxis: drivers having a dirty car, talking while driving, being rude, driving dangerously or inefficiently, and so on. I’d wager that “marketing a side business” complaints more or less never happen. If they do happen, it’s probably because the driver was particularly aggressive or annoying about promoting their business (or the passenger was idiosyncratically touchy). It certainly doesn’t seem to be against Uber’s policy—an Uber spokesperson said recently that Uber not only condones it, but encourages it.

Being subject to a reputation system is certainly personally costly to drivers—who likes being accountable?—but it’s not clear to me that even drivers as a whole should dislike it, so long as it applies to every driver. Bad experiences from things like poor driving or unclean vehicles are not just costly to passengers, but are also costly to other drivers, as they reduce the total demand for transportation services (NB: Chris Nosko & Steve Tadelis have a really nice paper quantifying the effects of these negative spillovers on other sellers, in the context of eBay). The problem with quality in the taxi industry, historically, is that competition doesn’t “work” to fix quality problems.

Competition can’t solve quality problems because a passenger only learns a driver was bad after already having the bad experience. Because of the way taxi hails work, passengers can’t meaningfully harm the driver by taking their business elsewhere in the future, like they could after a bad experience at a restaurant. As such, the bad-apple drivers don’t have incentives up front to be good or to improve. (The same goes for the other problem, bad passengers, which the reputation system also helps deal with.) Reputation systems—while far from perfect—solve this problem.

While reputation systems seem like something only a computer-mediated platform like Uber and Lyft can have, there’s no reason (other than cost) why regulated taxis couldn’t also start having reputation systems. Taxis could ask for passenger feedback in the car using the touch screen, and then use some of the advertising real estate outside the car to show average driver feedback scores to would-be passengers. This would probably be more socially useful than the usual NYC advertisements on top of yellow cabs, such as for gentleman’s clubs, e-cigarettes, and yellow cabs.

Disclosure: I worked with their data science team in the summer of 2015. However, the direction of causality is that I wanted to work with Uber because they are amazing; I don’t think Uber is amazing because I worked for them.


Often, I’m in a seminar or reading a paper and I want to quickly see if the difference in two means is likely to be due to chance or not. This comparison requires computing the standard error of the difference in means, which is $\sqrt{se_1^2 + se_2^2}$, where $se_1$ is the standard error of the first mean and $se_2$ is the standard error of the second mean. (Let’s call the difference in means $\Delta$.)

Squaring and taking square roots in your head (or on paper for that matter) is a hassle, but if the two standard errors are about the same, $se_1 \approx se_2 = se$, we can approximate this as $\sqrt{2} \cdot se \approx 1.5 \cdot se$, which is a particularly useful approximation. The reason is that the 95% CI for $\Delta$ is then $\pm 2 \times 1.5 \cdot se = \pm 3 \cdot se$ (i.e., 6 of our “original” standard errors). As such, we can construct the 95% CI for the difference Greek-geometer style, by taking the original CI, dividing it into fourths and then adding one more SE to each end.

The figure below illustrates the idea – we’re comparing A & B and so we construct a confidence interval for the difference between them, that is 6 SE’s in height. And we can easily see if that CI includes the top of B.
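To see how good this rule of thumb is, here is a quick check in R (the 0.10 standard error is just a made-up value):

```r
# Check the "6 original SEs" rule of thumb for the 95% CI of a difference.
# The standard error value is made up for illustration.
se      <- 0.10
se.diff <- sqrt(se^2 + se^2)  # exact: sqrt(2) * se, about 1.41 * se
# CI width for the difference, measured in "original" standard errors,
# using +/- 2 SEs as the 95% interval:
ci.width.in.se <- 2 * 2 * se.diff / se
ci.width.in.se  # about 5.7, which the rule of thumb rounds to 6
```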

What if the SE’s are different?

Often the means we compare don’t have the same standard error, and so the above approximation would be poor. However, so long as the standard errors are not so different, we can compute a better approximation without any squaring or taking square roots. One approximation for the true standard error that’s fairly easy to remember is:

$se_\Delta \approx 1.5 \cdot \frac{se_1 + se_2}{2}$.

This is just the first-order Taylor series approximation of the correct formula about $se_2 = se_1$ (and using $\sqrt{2} \approx 1.5$).
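A quick check of how this shortcut behaves as the two standard errors diverge (all standard error values here are made up):

```r
# Exact SE of a difference vs. the "1.5 times the average SE" shortcut.
# The standard errors passed in are made-up values for illustration.
CompareApprox <- function(se1, se2) {
  exact  <- sqrt(se1^2 + se2^2)
  approx <- 1.5 * (se1 + se2) / 2
  c(exact = exact, approx = approx,
    pct.error = 100 * (approx - exact) / exact)
}

CompareApprox(0.10, 0.12)  # SEs fairly close
CompareApprox(0.10, 0.30)  # SEs quite different
```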

Chris Blattman recently lamented reviewers asking him to cluster standard errors for a true experiment, which he viewed as incorrect, but had no citation to support his claim. It seems intuitive to me that Chris is right (and everyone commenting on his blog post agreed), but no one could point to something definitive.

I asked on Twitter whether a blog post with some simulations might help placate reviewers and he replied “beggars can’t be choosers”—and so here it is. My full code is on github.

To keep things simple, suppose we have a collection of individuals, indexed by $i$, that are nested in groups, indexed by $j$. For some outcome of interest $y_{ij}$, there’s an individual-specific effect, $\epsilon_i$, and a group-specific effect, $\eta_j$. This outcome also depends on whether a binary treatment has been applied (status indicated by $x_{ij}$), which has an effect size of $\beta$, i.e., $y_{ij} = \beta x_{ij} + \epsilon_i + \eta_j$.

We are interested in estimating $\beta$ and correctly reporting the uncertainty in that estimate.

First, we need to create a data set with a nested structure. The R code below does this, with a few things hard-wired: the $\epsilon_i$ and $\eta_j$ are both drawn from a standard normal and the probability of treatment assignment is 1/2. Note that the function takes a boolean parameter randomize.by.group that lets us randomize by group instead of by individual. We can specify the sample size, the number of groups and the size of the treatment effect.
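A minimal sketch of what such a function might look like (the full version is in the linked github repo; the argument order here is my guess from the example call below):

```r
# Sketch of the data-generating function. Argument order is a guess
# based on the example call CreateClusteredData(2, 10, 1, ...).
CreateClusteredData <- function(number.groups, n, treatment.effect,
                                randomize.by.group = FALSE) {
  group   <- sample(1:number.groups, n, replace = TRUE)
  epsilon <- rnorm(n)                     # individual-specific effect
  eta     <- rnorm(number.groups)[group]  # group-specific effect
  if (randomize.by.group) {
    trt <- rbinom(number.groups, 1, 0.5)[group]  # whole groups treated
  } else {
    trt <- rbinom(n, 1, 0.5)                     # individuals treated
  }
  y <- treatment.effect * trt + epsilon + eta
  data.frame(y, individual = 1:n, group, trt, epsilon, eta)
}
```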

This function returns a data frame that we can analyze. Here’s an example of the output. Note that for two individuals with the same group assignment, the $\eta_j$ term is the same, but that the treatment varies within groups.

> CreateClusteredData(2, 10, 1, randomize.by.group = FALSE)
             y individual group trt    epsilon        eta
1  -0.37803923          1     2   0  0.7122716 -1.0903109
2  -1.26061541          2     2   1 -1.1703046 -1.0903109
3   0.42009564          3     1   0 -0.1618996  0.5819952
4  -0.06142989          4     2   0  1.0288810 -1.0903109
5   0.33955484          5     2   1  0.4298657 -1.0903109
6  -2.21558690          6     2   0 -1.1252760 -1.0903109
7  -0.03748754          7     2   0  1.0528233 -1.0903109
8   0.27829270          8     1   0 -0.3037025  0.5819952
9  -0.76669849          9     2   1 -0.6763876 -1.0903109
10  1.60490249         10     1   0  1.0229073  0.5819952

Now we need a function that simulates us running an experiment and analyzing the data using a simple linear regression of the outcome on the treatment indicator. This function below returns the estimate, and the standard error, from one “run” of an experiment:
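A sketch of such a function (self-contained, with the data generation inlined; the default parameter values are my own choices):

```r
# Sketch of one "run": simulate clustered data with individual-level
# randomization, regress y on trt, return the estimate and its OLS SE.
RunExperiment <- function(n = 1000, number.groups = 50, treatment.effect = 1) {
  group   <- sample(1:number.groups, n, replace = TRUE)
  epsilon <- rnorm(n)                     # individual-specific effect
  eta     <- rnorm(number.groups)[group]  # group-specific effect
  trt     <- rbinom(n, 1, 0.5)            # randomized at the individual level
  y       <- treatment.effect * trt + epsilon + eta
  m <- summary(lm(y ~ trt))
  c(beta.hat = m$coefficients["trt", "Estimate"],
    se       = m$coefficients["trt", "Std. Error"])
}

RunExperiment()
```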

The standard error also has a sampling distribution but let’s just take the median value from all our simulations:

> # median value of standard errors
> (median.se <- df.sim.results %$% se %>% median)
[1] 0.08785142

If we compare this to the standard deviation of our collection of point estimates, we see the two values are nearly identical (which is good news):

> # standard deviation of the estimated betas
> (se <- df.sim.results %$% beta.hat %>% sd)
[1] 0.08791535

If we plot the empirical sampling distribution of $\hat{\beta}$ and label the 2.5% and 97.5% percentiles as well as the 95% CI (constructed using that median standard error) around the true $\beta$, the two intervals are right on top of each other:

Main takeaway: Despite the group structure, the plain vanilla OLS run with data from a true experiment returns the correct standard errors (at least for the parameters I’ve chosen for this particular simulation).

What if we randomize at the group level but don’t account for this group structure?

At the end of his blog post, Chris adds another cluster-related complaint:

Reviewing papers that randomize at the village or higher level and do not account for this through clustering or some other method. This too is wrong, wrong, wrong, and I see it happen all the time, especially political science and public health.

Let’s redo the analysis but change the level of randomization to group and see what happens if we ignore this level of randomization change. As before, we simulate and then compare the median standard error we observed from our simulations to the standard deviation of the sampling distribution of our estimated treatment effect:

The OLS standard errors are (way) too small—the median value from OLS is still about 0.08 (as expected), but the standard deviation of the sampling distribution of the estimated treatment effect is about 0.45. The resultant CIs look like this:

Eek. Here are two R-specific fixes, both of which seem to work fine. First, we can use a random effects model (from the lme4 package):
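A self-contained sketch of that fix, with the data generation inlined (this assumes you have the lme4 package installed; the parameter values are my own choices, so the numbers won’t match the figures above exactly):

```r
# Sketch: group-level randomization analyzed two ways.
# Requires the lme4 package (install.packages("lme4")).
library(lme4)
set.seed(123)
n <- 1000; number.groups <- 50
group <- sample(1:number.groups, n, replace = TRUE)
eta   <- rnorm(number.groups)                  # group-specific effect
trt   <- rbinom(number.groups, 1, 0.5)[group]  # whole groups treated
df    <- data.frame(y = trt + rnorm(n) + eta[group], trt, group)

# Naive OLS ignores the group structure and understates the SE:
coef(summary(lm(y ~ trt, data = df)))["trt", ]

# A model with a group random effect gives a more honest SE:
coef(summary(lmer(y ~ trt + (1 | group), data = df)))["trt", ]
```

(The second fix alluded to above is presumably cluster-robust standard errors, e.g., via the sandwich or multiwayvcov packages.)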

One closing thought, a non-econometric argument for why clustering can’t be necessary for a true experiment with randomization at the individual level: for *any* experiment, presumably there is some latent (i.e., unobserved to the researcher) grouping of the data such that the errors within that group are correlated with each other. As such, we could never use our standard tools for analyzing experiments to get the right standard errors if taking this latent grouping into account was necessary.

Suppose you run a website and you have some experience or feature that you think might be good for some subset of your users (but ineffective, at best, for others). You might try to (1) identify who would benefit based on observed characteristics and then (2) alter the experience only for a targeted subset of users expected to benefit.

To make things concrete, in some cities, Uber offers “UberFamily” which means the Uber comes with a car seat. For us (I have two kids), UberFamily is awesome, but the option takes up valuable screen real estate and for a user that Uber thinks does not have kids, adding it to the app screen is a waste. So Uber would like to both (a) figure out if it is likely that I have kids and then (b) adjust the experience based on that model. But they’d also like to know if it’s worth it in general to offer this service even among those they think could use it. This isn’t the example that motivated this blog post, but it makes the scenario clear.

If you are testing features of this sort, then you want to both (a) assess your targeting and (b) assess the feature itself. How should you proceed? I’m sure there’s probably some enormous literature on this question (there’s a literature on everything), but I figure by offering my thoughts and potentially being wrong on the Internet, I can be usefully corrected.

I think what you want to do is not test your targeting experimentally, but rather roll out the feature for everyone you reasonably can and then evaluate your targeting algorithms on your experimental data. So, you would run the experiment with a design that maximizes power to detect treatment effects (e.g., 50% to treatment, 50% to control). In other words, completely ignore your targeting algorithm’s recommendations.

Then, after the experimental data comes in, look for heterogeneous treatment effects conditioned on the predictive model score, where the score can be thought of as a measure of how much we think a person should have benefitted from the treatment. The simplest thing you could do would be to normalize all scores (so the scores have the same mean and variance across algorithms, making model coefficients directly interpretable across algorithms). Then just run the regression:

$y = \beta_0 + \beta_1 \cdot trt + \beta_2 \cdot score + \beta_3 \cdot (trt \times score) + \epsilon$

Hopefully, if the treatment was better for people the model thought would be helped, then the interaction coefficient $\beta_3$ should be positive (assuming the $y$ is such that bigger is better).
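The idea can be sketched with simulated data (all data and effect sizes here are invented):

```r
# Sketch: detect heterogeneous treatment effects via a trt x score
# interaction. All data are simulated; effect sizes are invented.
set.seed(42)
n     <- 5000
score <- as.numeric(scale(runif(n)))  # normalized targeting-model score
trt   <- rbinom(n, 1, 0.5)            # 50/50 randomization, ignoring the score
# True effect is increasing in the score, so the interaction should be positive:
y     <- 1 + 0.2 * trt + 0.5 * trt * score + rnorm(n)

m <- lm(y ~ trt * score)
coef(summary(m))["trt:score", ]
```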

You’d also want to find the minimum score at which you should be targeting people, i.e., the score at which the expected benefit from targeting first becomes positive. You can then simply select the algorithm with the greatest expected improvement, given the minimum score for targeting.

This seems like a reasonable approach (and maybe bordering on obvious but it wasn’t obvious to me at first). Any other suggestions?

When I write a sentence, there’s about a 10% chance it will have a typo or grammatical error of some kind. It’s often painful to find them later, as like most people, I tend to “fill in the gaps” or glide over typos when reading my own writing. Fortunately, this kind of editing, unlike, say, reading for structure or consistency, is very parallelizable. In fact, reading each sentence alone, out of order, might even be better than reading the whole document, sentence by sentence.

As an experiment, I wrote a little script that splits a document up into sentences, with one sentence per line (the script is here). With this CSV, I can use Mechanical Turk to create HITs, with one HIT per sentence. The instructions ask workers to label each sentence as “OK” or “Not OK”, with an optional field to explain their reasoning. The MTurk interface looks like this:

After splitting the sentences, I went through the CSV file to remove blank lines and LaTeX commands by hand, though one could easily add this feature to the script.
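The real script is linked above; here is a toy R version of the splitting step (the sentence-boundary regex is deliberately simplistic):

```r
# Toy sentence splitter: one sentence per row, written to a CSV for MTurk.
# The regex just splits on ., !, or ? followed by whitespace -- real text
# (abbreviations, decimals, LaTeX) needs something smarter.
SplitIntoSentences <- function(text) {
  sentences <- unlist(strsplit(text, "(?<=[.!?])\\s+", perl = TRUE))
  sentences[nchar(trimws(sentences)) > 0]
}

doc <- "This is the first sentence. Is this the second? Yes! Typos hide here."
df  <- data.frame(line = seq_along(SplitIntoSentences(doc)),
                  sentence = SplitIntoSentences(doc))
write.csv(df, "sentences.csv", row.names = FALSE)
```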

I posted the HITs on MTurk this morning, paying 2 cents per HIT, with 4 HITs per sentence (so each sentence will be checked by 4 different workers). The text was a paper I’m working on. Results started coming in remarkably quickly—here it is after 30 minutes:

I’m not thrilled with the hourly rate (I try to shoot for $5/hour) but this average is always very sensitive to workers who take a long time. So far, the comments are very helpful, especially since with multiple ratings, you can find problematic sentences—for example:

The “86” is the line number from the LaTeX document, which is nice because it makes it easier to go find the appropriate sentence to fix. Here are some more samples of the kinds of responses I’m getting:

Overall, I think it’s a successful experiment, though it was already well known from the Soylent project that MTurk workers can do editing tasks well.