Archive: Jul 2005

Law Professor Michael McCann has an intriguing post in his sports law blog on age and arrest among basketball players. David Stern and others claim that teen draftees might get into more trouble than well-seasoned college kids. Based on his sample of 84 arrested NBA players, McCann concludes: (1) non-college players are no more likely to be arrested than other players; and, (2) basketball players get into just as much trouble mid-career (or end-career) as they do at the beginning. Comparing college guy JR Ryder with high-schooler Kevin Garnett in Minnesota, the first conclusion doesn’t surprise me at all. The second one, however, conflicts with most of what we know about age and crime. So I grabbed his data and plotted some curves.

First, let’s plot the raw number of arrests by age group. I tossed out the retired players (we can hardly blame the NBA for them, can we?) and grouped the arrests by age, FBI-style. I then plotted the curves on two separate y-axes for easy comparison. Just click on the graphs to bring up the full-size versions.
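For anyone who wants to reproduce this kind of two-axis plot, here's a minimal matplotlib sketch. The counts below are made-up placeholders for illustration, not McCann's actual data.

```python
# Sketch of plotting two arrest series on separate y-axes.
# All numbers here are hypothetical, not the real arrest data.
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt

age_groups = ["18", "19", "20", "21", "22", "23", "24", "25-29", "30-34"]
nba_arrests = [1, 2, 2, 6, 9, 13, 11, 24, 10]            # hypothetical NBA counts
us_rate = [480, 510, 470, 440, 400, 360, 330, 290, 210]  # hypothetical US male rates

fig, ax1 = plt.subplots()
ax1.plot(age_groups, nba_arrests, "b-o")
ax1.set_xlabel("Age group")
ax1.set_ylabel("NBA arrests", color="b")

ax2 = ax1.twinx()  # second y-axis sharing the same x-axis
ax2.plot(age_groups, us_rate, "r-s")
ax2.set_ylabel("US male arrests per 100,000", color="r")

fig.savefig("arrests_by_age.png")
```

The `twinx()` call is what lets the two series share an x-axis while keeping their own scales, which is exactly why such plots can mislead (more on that below).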

By the time they get to the NBA, most players are already past their peak offending years. Still, the NBA peak comes relatively early: 23 versus 19 for US men overall. This made me think about age and selection into pro ball. Sadly, there are a lot more 23-year-olds than 41-year-olds in the NBA these days. So what we really need are age-adjusted rates. Here’s what happens when we standardize by the number of players in each age group (using 2003 roster data).
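The standardization itself is simple arithmetic: divide arrest counts by the number of players at risk in each age group. A quick sketch, with hypothetical roster and arrest counts rather than the actual 2003 figures:

```python
# Age-adjustment sketch: arrests per player in each age group.
# Counts are hypothetical stand-ins, not the 2003 roster data.
arrests = {"18-20": 3, "21-24": 37, "25-29": 24, "30-34": 12, "35+": 4}
rosters = {"18-20": 25, "21-24": 130, "25-29": 160, "30-34": 90, "35+": 30}

rates = {age: arrests[age] / rosters[age] for age in arrests}
for age, rate in sorted(rates.items()):
    print(f"{age}: {rate:.3f} arrests per player")
```

Without this adjustment, the raw counts mostly track roster size: a big pile of arrests among 23-year-olds may just mean there are a lot of 23-year-olds.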

The age-adjusted rate shows the mid-career blip that McCann mentioned. Even without the retirees, older players do get arrested (e.g., Gary Payton’s recent DUI). Moreover, McCann seems to be spot-on about the comparatively clean records of 18-, 19-, and 20-year-olds in the NBA. Still, the first two figures are somewhat misleading, since the x-axis shows single years for 16 to 24 but 5-year increments thereafter, and the NBA arrests are plotted on a different scale than the US male arrests. So, here’s what the age-adjusted arrest curve looks like for NBA players versus the rest of us.

No wonder coaches keep a close eye on their 23- and 24-year-olds. This is the only piece of the distribution where ballplayers are more likely to get arrested than regular Joes. Why? I’m thinking that the off-court activities of 18-year-olds center around in-room PlayStation. By the early twenties, however, wealthy young athletes probably venture out of the team hotel a bit more. So, the data are a bit sparse, but I think there’s enough here to draw two conclusions: (1) the age-crime curve applies to the NBA as elsewhere (it might not be invariant, but …); and, (2) McCann seems well-justified in challenging the NBA’s age floor for 18- and 19-year-olds.

I spent the last few days at the National Institute of Justice’s annual research and evaluation conference, “Evidence-Based Policies and Practices.” The idea is to connect policymakers and practitioners to a broad class of “researchers” studying crime and justice. Sociologists, even (or especially) public sociologists, tend to be cynical about applied/policy research, but this is one cool conference. A highlight for me was Del Elliott’s plenary address on his “Blueprints” model programs for violence prevention. In some ways, his presentation brought to mind James Coleman’s controversial 1992 ASA presidential address, “Rational Reconstruction of Society” — or at least one example of the fruits of Coleman’s programmatic challenge.

Elliott’s group identifies model programs based on classic social science criteria (e.g., randomized trials, sustained effects, independent replication) and then spreads the seed. He argues passionately against sending kids through programs that are known failures (e.g., Scared Straight, early DARE, most boot camps); he even hinted that class-action suits could be filed against courts that continue to do so, on grounds of negligence, if not malice aforethought. Mark Lipsey, the master of meta-analysis, explained how monitoring, training, and quality control (or “fidelity,” as they say in the business) can successfully replicate and sustain successful programs. [In evaluation research, it turns out that consistent implementation is just as important as what is being implemented. Most teachers know this; many teaching philosophies can “work,” but the absence of a philosophy or its inconsistent application usually fails.] He also offered evaluation strategies for when practitioners go beyond the data — adapting a model program to a new target group or unusual local conditions, for example. Finally, organizations such as the Washington State Institute for Public Policy and individuals such as (RAND pioneer) Peter Greenwood are conducting increasingly sophisticated cost-benefit analyses to distinguish the best from the lousiest societal investments in public safety.

Of course, such social-sciencey attempts to systematize prevention and rehabilitation programs will surely discipline and punish some creative and difficult-to-evaluate efforts. That said, the progress in documenting successful programs has been astounding in the past decade — from the “What Works” report to Congress in the late 1990s to the Campbell Collaboration’s new library of clinical trials. When I received my Ph.D. in 1995, many experts were still arguing that “nothing works” in corrections (and, one might add, “so what if it did”). Today, you’d be laughed out of the room if you made such claims. A real scientific basis for programs such as cognitive behavioral therapy and nurse home visits, for example, is now firmly established. A rational reconstruction of criminal justice, of course, would further require that policymakers attend more consistently to the science. At least we are creating the preconditions for such action — a base of knowledge that simply did not exist in earlier eras.

The crimprof blog cites the LA Times on ex-offender job fairs. Such fairs are being organized all over the country, with mixed results. In this case and in some others I’ve seen, few employers or ex-offenders even showed up. Those who did attend got good news (employers could get tax credits for hiring someone with a criminal record) and bad (many ex-offenders are ineligible for expungement). Such job fairs seem to be most successful in tight labor markets (e.g., 1999-2000 in most areas). On the employee side, turnout might improve by targeting current probationers or parolees, rather than former offenders who are “off-paper” and more difficult to mobilize. Mobilizing employers is more difficult, unless they face a labor shortage or former felons (potential “sponsors”) have a good track record in the firm or establishment. There are books and videos available for ex-offenders, and organizations such as Chicago’s Safer Foundation have a long history of successful job development and placement for this group. Still, I tend to agree with Richard Freeman — the best jobs program is probably a full-employment economy.
