Chick Sexers To Be Automated: Are Lawyers Next?

Chick sexers were famously touted by Dan Kahan because they can do something very hard (distinguish male from female chicks) but allegedly can’t explain well how they do it. Moreover, becoming a chick sexer, like learning how to be a lawyer, radiologist, diagnostician, or other professional, involves repeated confrontation with error. As Kahan explained:

“[T]he lawyer attains her skill – to recognize what she can’t cogently explain – in much the same way that the chick sexer does: through exposure to a professional slideshow, this one conducted by law grandmasters, including law professors but also other socialized lawyers, who authoritatively certify what count as good and bad decisions, sound and unsound arguments, thereby inculcating in students and young practitioners the power of intuitive perception distinctive of the legal craft.”

But now, it turns out, chick sexing is going to be turned over to automated (objective) pattern recognition machines. As the Economist noted, this development is “sad for the redundant sexers … but you can’t make an omelette without breaking eggs.” It’s also, of course, bad for the baby boy chickens, who will be turned into pet food more efficiently.

More grimly, taking Dan’s analogy seriously, this is a scary reminder for lawyers, especially the kind who dominate the profession (i.e., transactional attorneys), that professional judgment can sometimes be automated and that they must never be complacent. The only thing that lawyers really have on those poor chick sexers right now is a very entrenched guild.

Dave Hoffman is the Murray Shusterman Professor of Transactional and Business Law at Temple Law School. He specializes in law and psychology, contracts, and quantitative analysis of civil procedure. He currently teaches contracts, civil procedure, corporations, and law and economics.

First, Kahan’s description, even if it’s accurate about some aspect of practicing law, isn’t accurate about how transactional lawyers learn judgment. Second, he has a point about “authoritative certification.” The distinction between arguments from authority and arguments from merit has something to do with what successful transactional lawyers learn despite their inculcation in legal education. (To what sounds like your concluding point, I agree that there’s a guild effect, but it manifests itself in an entrenched way of thinking, not necessarily in its entrance requirements.) Third, more generally, if professional judgment can be automated, it would only be in the most trivial kinds of ways.

It’s not truly a case of lawyer automation — the machine doesn’t do exactly what the humans did. Rather, it’s an advance in process, effectively avoiding the judgment-call fork in the road by moving the determination to an earlier stage where it can be automated (estrogen detection in the egg). I’m not sure what that means on balance. Maybe that the real danger isn’t that lawyers will be automated, but that the river will find a more efficient course that avoids the Lawyer Rapids altogether.

I’m with Jeff in rejecting the narrative of how transactional lawyers learn judgment. One might also think the recent brouhaha over Toyota’s possible software problems would reduce the enthusiasm for delegating judgment to bots. And since many negotiations are preludes to ongoing relationships between the principals, clients who’d entrust the necessary people skills to software would ipso facto be so clueless that they’d be beyond human help in any case.

I suspect that — as in the movie, Blade Runner — we lawyers and law teachers already are automated and just haven’t realized it yet. Once we do realize it, then implementing the Carnegie Report on Legal Education will be a whole lot easier: We’ll simply install the next version of the software into our circuits (or perhaps directly into those of our students, making three years of Socratic questioning and exam-taking unnecessary).

I’m guessing your post was, as Jeff Lipshaw already suggests as a possibility, just tongue-in-cheek, meant to underscore the irony of chick sexers being replaced by machines only a few years after Dan Kahan cited chick sexing as an example of an astonishing feat that happens by intuition alone and is not reducible to rules. (As Wikipedia helpfully notes, “The ‘example of the chicken sexers’ is famous in several debates in philosophy, especially in the internalism/externalism debate in epistemology.” http://en.wikipedia.org/wiki/Chick_sexing)

Of course, the fact that one task that humans have learned to perform by intuition might also be performed by a machine — a task involving a binary choice (male or female) — doesn’t automatically mean that all such tasks, including those that are a whole lot more complex, are also ready to be automated. Richard Horsey suggests in “The Art of Chick Sexing” that chick sexing is an example, and only one example, of a process where “expert categorization does take place as a result of the recognition of specific features, but that since the process is not accessible to introspection, we are not aware that categorization is feature-based” (http://www.phon.ucl.ac.uk/publications/WPL/02papers/horsey.pdf). It may be that some legal reasoning tasks will one day be taken over by machines, but they may take a bit longer than chick sexing to automate, and some components of legal reasoning may not be replaceable in this way at all. To borrow from Jeff Lipshaw again, see his discussion of Cass Sunstein’s 2001 paper, Of Artificial Intelligence and Legal Reasoning (http://lawprofessors.typepad.com/legal_profession/2009/12/artificial-intelligence-and-legal-wisdom.html). Fortunately, even if computers can’t be lawyers, that doesn’t mean they can’t be clients. See Larry Solum, Legal Personhood for Artificial Intelligences (http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1108671).

Thanks for everyone’s comments. Marc expresses what I wanted to say more articulately than I ever could. One thing to consider is whether and to what extent lawyers’ fees over-emphasize the amount of legal judgment being exercised (as opposed to, say, the implementation of judgment, which could be automated).