To reiterate: nobody says GRE scores are a perfect predictor. I also believe that their predictive ability is lower for some groups. But the point is not perfection. The point is that the GRE sorta, kinda works and the alternatives do not.

Two comments: 1. I am intrigued. If the results can be replicated in other places, I would be thrilled. But so far, we have one (promising) study of a single program. Let's see more. 2. I am still not about to ditch GREs because I am not persuaded that academia is ready to implement a very intensive, in-depth interview admissions system as its primary selection mechanism. The Miller and Stassun column refers to a study of physics graduate students – small numbers. What is realistic for grad programs with many applicants is that you need to screen people for interviews, and that screen will include, you guessed it, standardized tests.

Bottom line: The GRE is far from perfect, but it is usable. There is no evidence that systematically undermines that claim. Some alternatives don't work, and the newly proposed method, in-depth interviews, will probably need to be coupled with GREs.

7 Responses

Is the GRE’s predictive power supposed to be linear? My anecdotal sense is that anyone with scores below, say, the 70th percentile is unlikely to succeed in a good program. But there is very little difference between someone with scores in the 70s and someone with scores in the 90s. I don’t think any of the studies you cite above deal with this problem.

Peter, we are talking about average effects across a wide range of programs. Econ programs are now like engineering programs. Only those with very strong math scores have a chance. But across the university, lots of programs can deal with more variation.

This argument convinces me that GREs are reasonable when applied to programs in which success is primarily based on coursework. But we know that most PhD programs have substantial dropout rates–and, as the Fisk and Vanderbilt study points out–the current system is very poor at predicting which students will stick around. Perhaps this is anecdotal, but I’ve known quite a number of people who left PhD programs, and I can tell you with complete confidence that not one of them left because “they couldn’t do well in the courses”–they left because they couldn’t or didn’t want to start a dissertation, couldn’t or didn’t want to finish a dissertation, couldn’t manage to find their way away from a terrible advisor, or discovered while in the PhD program that academe was not for them. The debate here is not, as you point out, whether GREs are a poor predictor. It is whether predicting grades in coursework is even useful for PhD admissions.

I think the interview idea sounds phenomenal. Obviously the larger programs are unlikely to adopt this, but I'd like to see some smaller programs with a real serious commitment to diversity and to bringing disadvantaged students into the fold try it out. Not all applicants need be interviewed, of course–I think such an experiment could clearly discard those with low GPAs, no prior research experience, etc. And perhaps there is even a utility in a really de minimis GRE cutoff depending on discipline (for example, students must be in the top third on at least one of the sections).

The point to me is that graduate departments have a responsibility to solve this problem. Rather than relying on a tool which might work in certain limited circumstances, they need to find tools that actually work–unless, of course, Marc Bousquet is right and graduate departments have a real vested interest in accepting and then discarding students (see http://worksanddays.net/2003/File09.Bousquet_File09.Bousquet.pdf).

Mikhaila,
I definitely saw fellow grad students leave the program because they couldn’t do the course work. Very few left the program after the course work was finished (I can only think of one person). No one seemed to leave because they didn’t want to write a dissertation or thought academe wasn’t for them. Of course, I am in economics, and math ability drives success in the course work and a PhD doesn’t mean you are headed for academe.

The GRE predicts problem-solving skills and most items under the big umbrella of academic aptitude, but it does not predict one's social skills, ambition, motivation, and so on. Net of other factors, it is a great predictor, but unfortunately, we cannot parcel out those other factors. As in many other areas, who is most likely to be successful in academia? Those with good social skills and medium- to upper-level intellect.

We could compile data from our old admissions process that could address some of these issues. An old study in my department found that “committee rating” had a positive effect on success net of test scores and grades, which is presumably capturing the “other” factors that the committee was noticing when they rated. (Test scores and grades also had independent effects in that old analysis.)

There is also the problem of disentangling highly inter-correlated factors: family SES, test scores, grades. Test scores are very highly correlated with parents’ education and children of highly educated parents do enter school with a large head start in the human capital race. Test scores net of parents’ background probably are capturing differences in ability, but in a graduate admissions pool you mostly don’t have the data to distinguish the two effects.
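The omitted-variable problem described above can be sketched with a toy simulation (all numbers and variable names here are hypothetical, not drawn from any study): when family background drives both test scores and outcomes but is unobserved, the test-score coefficient absorbs part of the background effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical data-generating process: family background (SES) raises
# both test scores and graduate-school outcomes.
ses = rng.normal(size=n)                       # unobserved family background
score = 0.6 * ses + rng.normal(size=n)         # test score, partly driven by SES
outcome = 0.3 * score + 0.4 * ses + rng.normal(size=n)

def ols(X, y):
    """Least-squares slope coefficients (intercept included, then dropped)."""
    X = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

# With SES observed, both true effects (0.3 and 0.4) are recovered.
b_full = ols(np.column_stack([score, ses]), outcome)

# With SES omitted -- the usual admissions situation -- the score
# coefficient is inflated, because score now proxies for background.
b_naive = ols(score.reshape(-1, 1), outcome)

print(b_full)   # close to [0.3, 0.4]
print(b_naive)  # noticeably larger than 0.3
```

In this sketch the naive coefficient lands near 0.48 rather than 0.3, which is the commenter's point: without data on parental background, an admissions file cannot tell how much of a score reflects ability and how much reflects head start.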

High test scores and low grades are almost always a bad bet: these tend to be smart people who just cannot get the motivation or self-discipline to get anything done.

I personally know a number of very successful sociologists who had low test scores at admission, and lots of students with high scores who were washouts. Which is not to say the correlation isn't there, but there's a lot of error around that coefficient.

Agree that letters of reference are mostly useless.

It was my feeling reading files that the writing samples were the best way to tell who really had some intellectual spark versus who was just following a template, especially if what they submitted was a major paper.