
Over at Conglomerate, native Wisconsin cheese head Gordon Smith has been posting about his study of artisan cheese makers. I wish I had something blog-relevant to say about this,
but here in northern lower Michigan, the big thing is cherries. At left you see a tub of these lovelies, tart Montmorency cherries picked from our cherry tree in front of the house (this morning, by yours truly, on an OSHA-compliant twelve-foot ladder) and now destined for a cherry crisp. (They are much redder than this picture makes them appear - my lousy cell phone camera.) If you happen to be in the neighborhood, feel free to stop by and pick a few yourself (hurry, the birds are eating them) after signing the appropriate waiver of liability.

In his introductory essay to Paradoxes, Oren Perez (Bar-Ilan) makes a point about rational calculation, in the context of the Learned Hand formula for negligence, that had never occurred to me, and which seems to make sense. (I invite anyone to explain why it is wrong!) It has broad application because it gets at the core relationship between the ex post outcome of cases (like Cerberus' "lessons" on eliminating ambiguities in drafting) and the ex ante calculation in respect of that outcome that lawyers (those most rational of actors) are supposed to make.

Perez's argument goes like this. The potential tortfeasor, informed by the case holdings, knows that she will be liable for the injury she causes if the cost of precaution is less than the probability of an accident times the magnitude of the accident. For the model to work, it has to assume that potential tortfeasors and judges are perfect welfare maximizers with perfect information. But information and deliberation are not costless. So maximizing actors need to make a decision about whether to invest costs in obtaining the necessary information and spending the time deliberating about the choice. That decision is itself not costless; one needs to gather information about whether gathering information and deliberating is a fruitful way to spend one's maximizing time. And so on to the infinite regress.
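For readers who like to see the first-order calculation made concrete, a minimal sketch of the Hand formula (liability where the burden of precaution B is less than the probability of harm P times the magnitude of the loss L) might look like this. The numbers are hypothetical, chosen only for illustration, and of course the sketch deliberately ignores the very regress Perez identifies: it assumes B, P, and L are already known, costlessly.

```python
def hand_formula_negligent(burden: float, probability: float, loss: float) -> bool:
    """Learned Hand formula: failing to take a precaution is negligent
    if its burden B is less than expected harm P * L."""
    return burden < probability * loss

# A $500 precaution against a 1% chance of a $100,000 injury:
# expected harm is $1,000, so skipping the precaution is negligent.
print(hand_formula_negligent(burden=500, probability=0.01, loss=100_000))    # True

# A $2,000 precaution against the same risk exceeds the expected harm,
# so the potential tortfeasor is not negligent for skipping it.
print(hand_formula_negligent(burden=2000, probability=0.01, loss=100_000))   # False
```

Perez's point is that deciding whether to spend resources estimating `probability` and `loss` in the first place is itself a cost-benefit problem, and so on down the regress.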
This appeals to my intuition in the same way as, and seems to be related to, at least analogically, the idea that rules cannot determine their own correct application. (If there were a rule for the application of a rule, then what would the rule be for the application of the rule for the application of a rule, and so on to the infinite regress.)

Perez's conclusion is that this is why we have rules of thumb for deciding what to do - they sit somewhere between unsatisfying calculation and pure intuition.

Inside counsel, as employees of the firm, are inclined to take orders and accept the “definition of the situation” (a phrase used by Milgram) from their superiors. These superiors happen to be a cohort of non-lawyer senior managers vested with the authority to speak on behalf of the organization and entrusted to give direction to inside counsel. They create the reality for inside counsel: they define objectives, identify specific responsibilities for inside lawyers and, ultimately, determine whether an inside lawyer’s performance is acceptable.
And accepting management’s “definition of the situation” means accepting management’s framing of the inside lawyer’s role and responsibilities.

This framing provides that compliance responsibilities be segmented. Although inside counsel’s duties include a prominent role in corporate compliance, it is business management that jealously guards the right to decide whether to comply with the law, which is seen as the ultimate risk management decision. For inside counsel to challenge management’s decisions or management’s authority to make decisions would then amount to clear insubordination.
Obedience in the corporate context will be substantial, so we should not be surprised by the banal tendency to listen to superiors.

Full disclosure. I spent eleven years of my career as an in-house lawyer, so it's entirely possible that I resemble that remark. (Professor Kim can also call on real-world experience as outside and inside lawyer, and in fairness, her very thoughtful and interesting Fordham Law Review article on the subject, which I recommend heartily, is more nuanced than the blog post.) But I'd be a lot more comfortable accepting this sweeping conclusion were it based on broad empirical evidence of actual in-house lawyer conduct rather than on what appears to be a combination of inference from the Milgram obedience experiments and well-known examples of lawyers behaving badly. I knew a lot of in-house lawyers, and while I can't say how they would have performed in the electric shock tests, and can't deny the impact of framing on decision-making, I sure saw a lot of thoughtful and courageous pushback to management on lots of legal and moral issues. Indeed, my casual observation was that individual moral choice and leadership in context, while certainly more elusive in their measurement, showed up more than just from time to time. I can't determine whether that was the exception or the rule. In any event, I applaud the coda to Professor Kim's bio: "I tell my students that there are two questions that every lawyer should ask when counseling a client about a proposed course of action. The first is: 'Is it legal?' The second is: 'Is it right?'"
But how do you make that call?

I struggle with the line between psychological "truths" and moral free agency. I am willing to accept the conclusion that we are hardwired to seek and justify physical and material well-being, and hence, a natural inclination for people, not just lawyers, is to comply and avoid conflict. I don't like, however, blanket statements about in-house lawyers doing this and that, and having this and that tendency. If I may engage in another exercise of shameless self-promotion, the point of my piece, Law as Rationalization: Getting Beyond Reason to Business Ethics, was to explore the difference between lawyers using reason to justify a desired material world outcome, and lawyers using reason as autonomous moral agents trying to discern ethical obligation.

The implication is that I don't think you can change things by incentives (more cheese for the rats). My answer is there has to be personal engagement in a continuing struggle to ask questions with the hope of getting answers along the way. To borrow from Robert Louis Stevenson, sometimes it is better to travel hopefully than to arrive.

We have hanging in our living room three prints signed by Rivera, part of a collection of ten he gave to my wife's grandfather, Nathan Milstein, a lawyer in Detroit, who did work for and befriended Rivera and Kahlo. (Family legend has it that Kahlo made a pass at him, but this is unconfirmed.) Nathan was born in 1907, graduated from Detroit Central High School in 1924, and attended the Detroit College of Law (then the Detroit City Law School and now the Michigan State University College of Law) and Wayne University Law School, receiving his LL.B. at age 21 in 1929. Nathan passed away in 2003, having continued to practice until his late eighties, and his seventy-four year tenure as a member of the bar is supposedly one of the longest in Michigan history.

Alene and I spent many hours going through his voluminous files. One truly appreciates the historian's and the biographer's art of distilling the story from the data when looking at records like these. The documents are tantalizing. For example, Nathan was a bachelor until 1946, when he married Alene's grandmother, a widow with two children. Before that, he was supporting his mother and sisters. When the war broke out, he tried for years to find a way to serve without being drafted as a private (which in 1941 paid $21 a month, not enough to support the family). Ultimately he found a job as a civilian flight instructor, but the file of letters to, and rejections from, almost every branch of the military and government agency is about two inches thick. I have framed in my office my personal favorite: the letter signed by John Edgar Hoover advising Nathan he had failed the F.B.I. entrance exam, which I had first interpreted as having been on account of Nathan's being Jewish while taking it.

The Rivera piece inspired me to go back through some of the files this morning (a quiet Christmas task). I realize now it's entirely likely Hoover objected to Nathan not only because of his ethnicity, but also because he consorted, in the course of his immigration practice, with all sorts of "undesirables," and espoused public positions to which the F.B.I. director of long memory must have objected.
As to his practice, I'm just now organizing a series of correspondence relating to his representation in late 1932 of one Halvard Lange Bojer, the son of noted Norwegian author, Johan Bojer. The younger Bojer, an engineer who had emigrated to the U.S. in 1928, was working for General Electric in Fort Wayne, Indiana, when he was arrested by the Immigration Service, and transported to the Wayne County Jail in Detroit, on the grounds that he was a member of the Communist Party. Bojer himself described it to a reporter as follows: "They tell me that I'm a Communist. . .It so happens that I'm a member of the Communist Party Opposition, whose headquarters is in New York. Members of that Party, though glad to take Moscow's advice, refuse to take Moscow's dictation. There are other differences, such as our belief that the worker's solution is in the organization of a Labor Party, comprised of Trade Unions, similar to that of England. Also, we disbelieve in Moscow's theory that existing labor organizations, such as the A.F. of L., should be wrecked for the formation of Communist units."

The American Civil Liberties Union attempted to intervene on Bojer's behalf. (I can't tell if Nathan was already representing Bojer or if the ACLU retained him on Bojer's behalf.) On December 12, 1932, Roger Baldwin, the ACLU Director, wrote to Nathan, urging Bojer to fight deportation as a test case. Baldwin stated: "The issue is far more than personal to him. This is the first case, so far as we are aware, when a member of his particular Communist group has been held for deportation on the ground of membership. It is worth fighting through because it offers a test of the application of the law to other than members of the Communist Party." Nathan met with Bojer in the Wayne County jail, where Bojer, "a very affable and highly cultured young man," advised that he had no desire to appeal the deportation, and was willing to return to Norway. He was released pursuant to a bond posted by his friends in Fort Wayne, and joined an "East bound deportation party" on December 29, 1932.

As to Nathan's political views, here's an excerpt from his tribute to Judge Arthur C. Denison on the occasion of his retirement from the Sixth Circuit Court of Appeals in January 1932:

Humanizing the enforcement of existing laws relating to admission and deportation of aliens has become a serious problem confronting social leaders throughout the country. In the present delirium of unemployment when a vague terror seizes the nation, this fear is translated into alien hatred. Public discontent must be directed away from the cause of the unrest and to accomplish this, a counter irritant is administered. The ever oppressed alien is again victimized. The term alien becomes synonymous with undesirable. Deportation "drives" and "spectacular raids" then become common occurrences. Wholesale deportation follows as a panacea for what ails the nation. This national hysteria influences the action of public officials and finds expression in more rigid and relentless enforcement of deportation laws. Even the courts are sometimes swept into the whirling cyclone, marring the annals of juridical science with unprecedented decisions. To espouse the cause of the under-privileged requires great courage. Those who bear the courage of their convictions and refuse to be swayed, belong to the school of Holmes and Brandeis. So few do they number that a loss in the ranks is keenly felt by liberty loving citizens.

Just an ordinary kid from an ordinary school in an ordinary city. Whose parents had been aliens.

When I was at Tulane last year, I got a call from the Times-Picayune to comment on what has now become this story about the Fifth Circuit's recommendation that Federal Judge Thomas Porteous be impeached. The issue on which I was asked to comment was the propriety of an alleged $1,000 hunting trip to which the judge was treated by a defendant company in a pending maritime injury case, and which was not disclosed to the plaintiff.
Looking back at my comments, I now recall what seemed so odd about the whole thing.

"Federal judges by and large have the reputations of being absolute paragons of integrity," said Jeffrey Lipshaw, a visiting professor at Tulane University Law School. "The perception is that they bend over backwards to avoid even the appearance of impropriety."

Lipshaw said Porteous, who makes $165,200 a year, might have considered the value of the excursions so trifling that they would not be seen as swaying his conduct in court. If the judge thought there was something improper about the trips, Lipshaw said, why would he disclose them on his financial reports, which are submitted to the Judicial Conference and remain public record for five years?
* * *
"It is entirely possible that the gifts in fact did not influence him," Lipshaw said. "But even if in your own mind you know they did not make any difference, and you are just as likely to rule for or against on the merits, the very reason it smells funny is the reason you should not do it."

Yes, why take the tiny benefit and then disclose it? Assuming the allegations are borne out, this is not as simple as saying a person is crooked. I see the option backdating issue the same way. You have managed, either by frame of reference (model or game?) or by internal advocacy (call it rationalization), to put aside that moral tickle ("hmm, should I take that hunting trip when I have a case pending with the company? Gosh, it's only $1,000, and I will disclose it on my yearly report" or "hmm, what's wrong with creating a document that says the options were granted when they weren't? I'm just correcting a stupid accounting anomaly").
David Brooks had an insightful New York Times op-ed on Barack Obama a few days back, and I think the piece captures the essence of the theme. Your sense of right and wrong has to predate and transcend the context or the frame. Brooks observed: "Many of the best presidents in U.S. history had their character forged before they entered politics and carried to it a degree of self-possession and tranquillity that was impervious to the Sturm und Drang of White House life."
You can make an argument for anything, but there's still that smell test.

I haven't been blogging much over the last month or two (I will be guest blogging over at Concurring Opinions in December, however), leaving Mike Frisch with the laboring (and probably far more useful) oar. I have to admit that some of my inactivity has to do with things like Twitter and Facebook and MySpace, which aren't blogs, but are simply more information than I care to have about just about anybody. So I figure that unless I have something to say on a subject, I'll do everybody a favor and keep a log of my daily activities, as illuminating as they may be, to myself.

Okay. So much for my curmudgeonly rant. Here at Suffolk we have a wonderful set of clinical offerings under the direction of Professor Jeff Pokorak
(right). We were talking to someone the other day about our juvenile justice clinic, and the problem of burn-out among Legal Aid lawyers who represent juvenile clients in the system. I wondered how much burn-out had to do not with the overwhelming amount of work and insufficient resources, but instead with the ultimate futility of trying to hold back the ocean of a broken component of society on a case-by-case-by-case basis.

I have compiled a reading list for December, and one entry is Charles Taylor's A Secular Age. (This is quite a commitment, given that there are 776 pages of text.) The thesis, though, is fairly simple, and given to the reader in the first twenty pages. Why is it so easy in 2007 not to believe in God (at least in the North Atlantic world with which he is concerned) when in 1500 it was almost impossible not to? He proposes three concepts of secularity and focuses on the third: (1) the decline of religion in public spaces (i.e., the separation of church and state); (2) the decline of religious practice; and (3) the development of a culture in which it is acknowledged that there are many routes to spiritual "fullness" (Taylor's term), one of which is exclusively humanist or secular. It seems to me that the whole notion of futility is a modern and secular one, captured by Taylor's description of a whole class of "unbelievers" (i.e., those who no longer believe in God as one might have believed in 1500) who nevertheless live the experience of something like nostalgia for the transcendent as a basis for fullness. To put it more simply, futility arises from a kind of cognitive gap: between the understanding that it's entirely possible nothing will ever make a difference, and the desire to be fulfilled. If you have no particular expectation of fullness, on one hand (see pragmatism, atheism, skepticism, post-modernism), or you are positive in your belief that everything DOES make a difference (see fundamentalism), I suspect futility is not an issue for you. But in between the assurance of a transcendent truth and an unawareness or rejection of anything but the material there is the possibility of futility.

So you just stand in awe and admiration of people who slog through it all day by day, plugging holes in the dike, or pushing back the ocean, wondering how they keep at it. Or the cosmologists like Andrei Linde at Stanford (right) working the question of the origins of the universe knowing they will never
know if their theories, like inflationary cosmology, are correct. Or, I suppose, in a comparatively trivial way, there is the futility of my own intellectual endeavor, which is to keep proposing answers to imponderable questions, even though I know none of the answers will suffice.

A loyal reader dropped me a note today wondering if I had disappeared. Nope. Just
busy. On the occasion of my flying off to Ann Arbor to see my son and attend the Michigan-Ohio State game, I have decided to come out and reveal that, yes, in fact, I am a Michigan Man. [Take that, Rapoport.]

A good friend, a great lawyer, and an exceptional human being, Kathleen McCree Lewis, passed away on
Tuesday after a long bout with cancer. She was the head of the appellate litigation section at Dykema Gossett in Detroit. President Clinton twice nominated her to the Sixth Circuit, but the nomination stalled in the partisan clash over federal judicial nominations at the end of his term.

She was one of the finest thinkers and writers I ever met, and she and her husband, David, truly one of the most elegant couples ever to walk the earth, but most of all, she was a person who cared very deeply about everybody around her. After I had left Dykema and become the general counsel at AlliedSignal, I hired away from Dykema one of its star lawyers, and one of Kathleen's proteges. I called her a few days later, and there was long silence. It was clear she was very angry at me. Finally she said, "you better be very, very good to him, or you will have me to deal with."

Having just returned from the Midwestern Law and Economics Association conference, and having this morning read Rob Kar's great first post on PrawfsBlawg (what is he going to do for a follow-up to that?), I was reminded again of the fundamental question Tevye the Dairyman, the protagonist of Fiddler on the Roof, raised about interdisciplinary studies. Tevye, in advising his daughter about the problems of inter-marriage, says "a fish could marry a bird, but where would they live?"

The myth of horizontal organization is that you can keep a business organization dynamic and growing merely by agglomerating value-creating specialties. But if that's the case, it's like fish and birds, and who sees the places where neither of them lives? Either everybody is responsible for the gaps between specialties (which in practice means nobody is), or nobody is responsible to begin with; the gaps go unowned either way.

My talk at MLEA dealt, in the broadest sense, with trying to use algorithmic economic models to map linguistic or moral models. That is, can you draw legal policy conclusions by casting what the parties to a contract mean into the equations of welfare economics, so as to resolve disputes about contract interpretation in an economically efficient way? While I'd say about 40% of my time on this over the last couple of weeks was devoted to refining the point I was trying to make, the other 60% was devoted to what is essentially translation. My first attempts, thoughtfully critiqued by colleagues Eric Blumenson and Andy Perlman, were largely cast in the jargon of philosophy of language and cognitive science, and I think we made great strides in bringing the ideas to a common denominator of relatively plain English (albeit plain English with words I made up). Nevertheless, I have reason to believe I was not entirely successful (nor entirely unsuccessful) in communicating with the audience.

On the flip side, there were portions of the conference - mostly those with complex equations - as to which I might as well have been listening to a talk in French. I would have understood enough of the syntax and the occasional words or English cognates to be able to say, with about this level of specificity: "they are talking, I think, about wine, and either about its price or the tannin levels."

Which brings me back to the subject of Rob Kar's post, about which I have great passion. He's responding to Brian Leiter and Michael Weisberg's critique of the recent convergence of law and evolutionary biology. Now, again, we have a translation issue, but I read the Leiter/Weisberg critique as saying that evolutionary biology has yet to show it is capable of shedding light on the "non-plasticity" of behaviors, such that they might be the subject of legal policy. I interpret non-plasticity as the behavior being fixed, or rigid, or hard-wired, or universal in a particular circumstance, as shown biologically, such that we might have confidence that the generalization in a legal rule is neither under-inclusive nor over-inclusive. I think Rob agrees with that (as do I), but his broader point goes back to how fish and birds, or sub-specialties, might learn to talk to each other, much less live together.

The point is the myth of the horizontal organization. A new discipline that fits in between the cracks of the old ones needs to adopt its own rigorous standards, but they won't be the standards of any of the contributing disciplines. I particularly took to heart Rob's inclusion of the philosophy of science and an analogy to meta-ethical thinking in the mix of disciplines that might inform this venture. Particularly as to the latter, without a good dose of thinking about thinking, the project will never be more than the sum of its parts.

I've started but not finished a wonderfully creative piece by my colleague, friend, and office next door neighbor, Jessica Silbey (left), The Mythical Beginnings of Intellectual Property Law, forthcoming in the
George Mason Law Review, but it was only this morning that I reached back to the very infancy of my academic life, and concluded that what was old is new, and, I suppose, vice versa.

If I may grossly oversimplify, Jessica's ambitious thesis is that utilitarian (read: economic) theories of intellectual property law do not fully account for its importance. She posits a narrative significance to creativity, supported by intellectual property rights, as a form of the "origin myths" or "origin stories" (I think of Horatio Alger, or George Washington and the cherry tree, or Abraham smashing the idols) that serve as models for human behavior and give meaning to our lives.

There is an inescapable link between my first dive into academic rigor as an undergraduate some thirty-plus years ago in the vibrant history department at the University of Michigan, and Jessica's call for narrative. I still recall the graduate student instructor (my long-time friend Andy Achenbaum) in the first session of the small section of my first U.S. history course describing the paper requirements, and telling us that we should think of them as "legal briefs." As I had no idea what either a good history paper or a good legal brief looked like, it was not, at the time, particularly helpful advice. But I know now that all scholarship, implicitly or explicitly, makes an argument linking data through some structure or process of theorization.

The hot topic back then (mid-1970s) was the call to import social science methodology into historical analysis, as a (or the) way of making that argument. Another of my professors, Robert F. Berkhofer, Jr., had then recently written a book entitled A Behavioral Approach to Historical Analysis, a call to employ historiographical methods that pierced through the possibility of myth-making by understanding the roles of actors and interpreters in the writing of history. It was a reaction to the interpretive or narrative nature of the study of history, which had no doubt as much to do with the time and place of the narrator as with that of the actors. (The example I recall most vividly was that Arthur Schlesinger's The Age of Jackson seemed to import a fair amount of The Age of Roosevelt, reflecting as much the author as the subject.) That is, to what extent were historians writing history, versus writing the Great Stories?

I was separately, and for my own purposes, trying to reconstruct what had happened to Berkhofer's thesis about social science methods in history, and did a Google search this morning. I came upon a review, authored by Thomas Haskell at Rice, of Berkhofer's 1995 book, Beyond the Great Story: History as Text and Discourse (Harvard University Press, 1995). (The review is "Farewell to Fallibilism: Robert Berkhofer's Beyond the Great Story and the Allure of Post-Modernism," 3 History and Theory 347 (1998).) Now, I have not read the book, only the review, but it serves my point here just as well. The review was devastating, but, despite my fond memories of my time with Professor Berkhofer, I have to admit I was sympathetic to its point, which was essentially this: there's nothing like the reaction of the disappointed absolutist (read: Berkhofer the behavioral theorist) who despairs of his theory and proceeds from rigorous causal explanation to a rejection of all theory, with no stop in between for the possibility that life (read: history) is too complex either for algorithmic solution or for complete deconstruction.

From our door post discussions, I suspect that Jessica herself has little patience for my meta-thinking about how academic or practicing lawyers think in models. But it seems to me that the same unresolved (and if Haskell is to be believed - and I think he is - unresolvable) issues of historiography, the perseverance of the old antinomies like explanation and understanding, of empiricism and intuition, prevail in the legal briefs we want to write as legal academics. This paragraph of Haskell's review of Berkhofer stopped me in my tracks:

The lamentable inadequacy of the so-called "modernist paradigm" turns out to be that it will not reduce to an algorithm. On [Berkhofer's] account, the normal paradigm makes of historical inquiry a fallible project, the crucial features of which cannot be embodied in any set of explicit instructions, or be carried out in any fixed mental mode. It requires of its practitioners that they be nimble enough to shift mental gears as the intellectual terrain varies and to juggle alternative modes of thought, which may pull in different directions. They must even dare to make judgment calls, with no guarantee of being right and every prospect of being criticized. Rather than declaring history to be purely an art or purely a science, the conventional paradigm assumes that historical inquiry, like life itself, displays elements of both. Indeed, it assumes that the mental repertoire of the historian differs in no deep, fundamental way from that of common sense, which is eclectic through and through. This strikes Berkhofer as intolerably messy and methodologically promiscuous, a project bound to fail because it naively encourages crossbreeding between different species of thought.

So here's a toast to the "intolerably messy" and the "methodologically promiscuous" as reflected in Jessica's new piece, at least as it stands as a humanities approach in contrast to the prevailing social scientism of the legal academy. And to more crossbreeding between different species of thought.

A week or so ago, I referred to an essay by the Israeli philosopher, Joseph Agassi. As I sit here
(procrastinating) with a stack of nine books (not articles) I want to read, not including Charles Taylor's A Secular Age, which just came out and is almost 900 pages, but which I have yet to order, I take some heart from Professor Agassi's advice in his essay, Scientific Literacy, on the art of browsing, which I recommend browsing. Except when you are browsing, don't skip the introductory paragraph from this student of Karl Popper:

The central end of all my research activities was the effort to break down the walls of the academy. The wall is defended by the idea that not only do experts possess knowledge beyond the ken of lay people, which is trivially true, but that there is an unbridgeable gulf between the two. The aim of this presentation, then, is to discuss the possibility of building a bridge between the ordinary educated citizen and the expert.

This is apropos of legal academia in particular, for three reasons: (1) the issue of walls and the breaking (or construction) thereof implicit in "law and..." disciplines; (2) the particular position of legal academia between scholarship (the expert?) and professional training (the ordinary educated citizen?); and (3) the fact that most of us, experts and ordinary educated citizens alike, are in fact simply ordinary educated citizens with respect to MOST of what we know, as almost any Tuesday or Thursday law school faculty lunch presentation will demonstrate.

If it happens that you read this blog and not Tax Prof Blog, then you are missing Paul Caron's series in which he asks "legal luminaries" to state the single most important piece of advice for the reform of legal education they would give Erwin Chemerinsky, the new-former-new dean of the new law school at UC-Irvine. Paul was lacking for luminaries one day, and asked me to respond, which I did, with my usual incomprehensible and stratospheric flair.

It's always interesting when something you are studying is the subject of popular commentary. The New York Times "Week in Review" today has an article on algorithms and the interactions between humans and computers that turns on the Turing test (developed by British code and cybernetics genius Alan Turing, the breaker of the German Enigma code at Bletchley Park during World War II). The ultimate
test of artificial intelligence is whether a tester corresponding with a computer cannot tell if her correspondent is a computer or a human. (The article points out that the word "algorithm" comes from the name of the ancient Baghdad scholar ibn Musa al-Khwarizmi, pictured on a Russian stamp at left.)

I happened to be reading an essay, "Analogies Hard and Soft" by Joseph Agassi (Tel Aviv University, Philosophy, right) (in David Helman's 1988 anthology, Analogical Reasoning: Perspectives of Artificial Intelligence, Cognitive Science, and Philosophy, which I picked up from Cass Sunstein's citation to it in his 1993 Harvard Law Review article on analogical
reasoning). The essay also addresses the Turing test and the limits of artificial intelligence. To summarize, analogies fascinate because they waiver between "hard" copies or forgeries of something else, at one extreme (e.g. a computer simulation so good that it cannot be distinguished from a human correspondent, or a piece of expert art forgery), and wispy soft comparisons: "they are vague in the limits of their applicability, they are suggestive, they are not simply vague and indefinite, they stimulate one's thinking, they offer possibilities which scintillate between promise and disappointment." Here is the metaphysical issue Agassi raises: is it possible to program so powerfully that it replicates all possible human (i.e. brain) programming? The problem is one of self-reference and infinite regress. The Turing test requires a tester. If the tester concluded that the program was the ultimate copy, then the program should also be able to replicate what the tester just did. But that would mean that there had to be a "meta-program" to be tested now by a "meta-tester." And so on.

Indeed, the natural hypothesis here, that no program will ever be able wholly to replicate the power of the brain, particularly as it relates to creation or inventiveness, is, as Agassi points out, an inductive and not a deductive conclusion. Showing that a computer cannot replicate something, for example, as noted in the Times article, image or sound recognition (try posting a comment to this blog for an example of that!), merely supports but does not prove the thesis.

I suppose I ought to conclude this with a couple of implications that make this somehow relevant to lawyers. The point being made by the Times story is not: it is about systems that make up for the failure of the perfect algorithm by incorporating human brains into cybersystems, thus taking advantage of what human brains (creativity? inventiveness? analogy?) and computers (speed of processing) each do best. But a point made by Agassi is: the vagueness of analogy finds its practical frustration in the determination of patentability. "[H]ow inventive should a forgery be in order to make it patentable? . . . Michael Polanyi, the famous philosopher of science who took expertise as axiomatic and undefined, has claimed that no formula can be given to justify the patent-tester's decision." Indeed, Agassi notes the infinite regress of the attempt to use algorithms in the patent office; once a court states the formula, "competitors forge new forgeries by what is technically known as going around the patent, i.e. varying it trivially but with sufficient significance using formulas accepted by courts. Courts may take notice and improve their formulas, but not in retrospect!"

That's the suggestion of Michelle Morris, Lecturer in Law and Research Librarian at the University of Virginia Law School in a piece over at the Yale Law Journal Pocket Part in a reaction to "L'affaire Trustafarian" involving Boalt and Hastings after the Virginia Tech tragedy.

Alan and I both expressed views similar to those of Michelle back when the issue was hot - law students need to understand that they become lawyers, and are held to the standard of lawyers, when they get to law school, not just when they graduate. The question back then was whether the Boalt student at the center of the controversy would be obliged to disclose the contretemps in his or her bar application. Michelle goes two steps further by suggesting that not only the bar application but also the law school application require the disclosure of any screen names or aliases used by the applicant.

I'm not sure how I feel about the suggestion. Requiring disclosure of online activity while one was a law student, at least after having been given the kind of warning some schools are now giving (I believe including here at Suffolk), does not seem too draconian to me. But I'm not sure it's fair to go back to what one did as an eighteen-year-old, and in any event, do the costs of that outweigh the benefits?

When I saw Brian Leiter's teaser title "The Worst Jurisprudential Article of the Year?" with nothing but a link to the "winner" (the complete discussion is on our sister blog "Brian Leiter's Legal Philosophy Blog"), I have to admit to a momentary pang of wondering whether it would be something I wrote. But the pang really should have been qualified by what I suspect is the sample set: jurisprudential articles written by people who are moderately important in the field. Thus disqualifying me!

Brian's target is a thirteen-page essay by Steven D. Smith at San Diego (right), the author of Law's Quandary, a book I very much enjoyed for its probing of the "beingness" or ontology of the law. In short, Law's Quandary asked this question: if we are all now legal realists, understanding the instrumental aspect of law, and positivists, understanding that law is what courts, legislatures, and other authorities say it is, why do lawyers still argue about the results as though the LAW were immanent and yet simply to be discovered and applied to the present dispute?

I had not read the essay, but did quickly after reading the review. Perhaps I was inclined to be more charitable because Law's Quandary gave me pleasure, because the new piece was short, and because my own view of short thought pieces posted on SSRN is that they are not papers, but pieces in the spirit of Brian's introduction to his blog: "'thinking out loud' in the sense that they won't be polished or heavily revised, and thus no doubt replete with errors and misunderstandings." Indeed, the line blurs between a very thoughtful blog post and a quick thought piece on SSRN, and spending too much time on which is "scholarship" is probably just the kind of angel-counting we'd all like to avoid.

So here is my more charitable take on what Professor Smith had to say, with a nod to what I think made Professor Leiter uncomfortable (although he certainly doesn't need me to help him do that!). Outside of legal philosophy, there is a sense among some, similar to what Professor Smith described, that the analytical philosophy of the last century is arid and fails to get at what attracts many to philosophy in the first place - addressing ultimate questions. I think that's the simple point being made in the piece, and it is a call for legal philosophers, in the words of Holmes' flourish, to "connect...with the universe and catch an echo of the infinite." Being lawyers does not exempt us from the human condition of self-reference and making sense of the world. Coming from a quarter century of practice, I don't think a little reflection on meta-issues is such a bad thing every once in a while - particularly when one is sorting through significant pragmatic issues of the legal and the ethical. No less a legal philosopher than Martha Nussbaum (Flawed Foundations: A Philosophical Critique of (a Particular Type of) Economics, 64 U. Chi. L. Rev. 1197, 1214 (1997)) had this to say about the Posnerian rejection of moral philosophy:

Aristotle thought that there was conceptual progress in political thought. For when we sit down and sort through all the good and bad arguments our major predecessors have made, we will learn a lot: “Some of these things have been said by many people over a long period of time, others by a few distinguished people; it is reasonable to suppose that none of them has missed the target totally, but each has gotten something or even a lot of things right.” Furthermore, we will also be enabled to avoid their errors. Finally, perhaps, we will ourselves make a little progress beyond them. Aristotle also noticed, however, that the passion for science and simplicity frequently lead highly intelligent people into conceptual confusion and an impoverished view of the human world. So he did not think that progress was inevitable, and one of his great arguments for reading was that it could remind us of conceptual complexities we might otherwise efface, in our zeal to make life more tractable than it is.

Science does not have to be impoverished; in fact, it must not be, if it is to deliver perspicuous descriptions, adequate predictions, and, perhaps, helpful normative recommendations. But Law and Economics is currently still somewhat impoverished. It is impoverished because it did not proceed in the way that Aristotle recommends, sitting down with the arguments of eminent predecessors to see what can be learned from their years of labor. Let us hope that this process will soon begin. There would seem to be no better place for it to begin than in Chicago.

A piece from my friend Susan Neiman (left), author of Evil in Modern Thought, director of the Einstein Forum in Potsdam, and most recently a member of the Institute for Advanced Study School of Social Science in Princeton, underscores both the invitation to do a little philosophizing and Professor Leiter's concern about it. She relates this story about a fellow grad student: "Asked on a tour of prospective graduate programs why he'd chosen philosophy, he answered 'Well, like most people I read Nietzsche and Sartre in high school and just wanted to go on.' His interlocutor, a hard-nosed defender of classic analytic philosophy, responded, 'Yes, but most people grow out of that.'" Her point was that the traditional philosophy curriculum causes "many students simply [to] take up subjects like history or politics or literature, which have clearer connections to the questions about meaning, and how to live, that sent them to philosophy."

Nevertheless, and this is where I think Professor Leiter has a point, it's walking a fine line to focus, particularly in teaching, on the search for meaning without proposing an answer that has the sniff of dogmatism to it. Professor Neiman says: "George W. Bush's faith that Providence will right what all the odds say will go wrong is a terrifying example of the sort of thing that gave Providence a bad name. If we reject such faith - or even more thoughtful versions of faith tout court - how can we ask our students to take 18th century appeals seriously?" I was uncomfortable with the suggestion at the end of Law's Quandary that the hypothetical author of the law is really an Author, and Justice Scalia, in his First Things review of the book, chided Professor Smith for beating around the bush; in so many words, "just say the Author is God!"

To summarize: I did not take Professor Smith's piece as a work of jurisprudential scholarship so much as a cri de coeur about what might be meaningful in the field of legal philosophy. That, it seems to me, raises a valid point. On the other hand, I don't have much of an answer for somebody who insists that he or she has an insight into the mind of God on the specifics of His or Her personal intervention into the shaping of the positive law. I don't think that was where Professor Smith was headed, but I do understand concerns about making this a religious exercise rather than a philosophical debate.

There is, on one hand, a sinister connotation to the word "confidence." It is the source of the term "con artist." The con artist obtains your confidence under false pretenses and then stings you.

On the other hand, there is a positive connotation to the same word, and it is fundamental to a practicing lawyer's stock in trade - the ability to read people and make judgments about whom to trust. Confidence is crucial to any skill that requires the crossing of a judgmental inflection point. To make a physical analogy, when you ski, you traverse the hill by crossing the fall line, the line along which a ball would roll down the hill. For a moment as you make the turn, you are potentially in free fall, and the way you succeed is to have the confidence to make the turn, cross the fall line, and regain the security of a traversed position. To ski very well, you must be so confident that you cross the fall line dozens of times in a very short span of time and distance. It seems to me anybody who vests trust in another person crosses an equivalent "fall line."

What happens to your self-confidence in the second sense when you discover you have been taken - big time - in the first? I'm contemplating that about myself right now, and some background and thoughts are below the fold.

As I mentioned in a comment to a Rick Garnett post over at PrawfsBlawg a couple of days ago, I love mountain climbing stories, even though my acrophobia is so pronounced I can't imagine doing it (yikes, a panic attack at 16,000 feet!). My first vicarious experience was attending a slide show hosted by one of my college roommates, who climbed Mt. McKinley in the early 1970s as part of the National Outdoor Leadership School. Then one of my former partners from Dykema Gossett in Detroit, Lou Kasischke, was part of the disastrous Everest climb in 1996, and Lou shows up as one of the saner people in Jon Krakauer's Into Thin Air. (Lou sensed trouble lurking, turned back without reaching the summit, and spent the awful night in his tent, not aware of the problems up above. He did a speaking engagement thing afterwards, in which he talked about the balance of ambition and judgment, particularly when oxygen-deprived at 26,000 feet.)

There's a new book out entitled Forever on the Mountain, by James M. Tabor, about a 1967 climb up Mt. McKinley in which seven climbers died in an Arctic hurricane. The thesis is that at least one cause of the disaster was the nature of the group itself - a hybrid agglomeration of climbers, most of whom did not know each other before the expedition, and very few of whom acknowledged the leadership of the organizer. In particular, one group of three had been merged formally into the larger group of nine, and there was friction between the two group leaders. Think about it: the entire group had only one common goal - to get to the summit and back down. Nevertheless, the expedition was rife with personal agendas, feuds, and a lot of bitching and moaning. One example: some members refused to be on ropes with other members.

I can't really talk about faculties except by hearsay (at this point), and I could make the obvious analogies to fractious partner meetings, but I think I will focus on speed bumps. Some of us here in our little self-governed neighborhood thought there was a significant problem with speeding on the street. The board, which consists of five of the thirty-six homeowners, voted to put in seasonal speed bumps. Nobody likes the speed bumps, but some people really hate the speed bumps. But my favorite response was the person who sent around an e-mail questioning the legal authority of the board to install the speed bumps.

Last March, I mentioned my friend Tom Roberts (right), the lead partner in corporate and M&A at Weil, Gotshal & Manges in New York. Tom's comments are featured in this week's Time in connection with the tailing off of the hottest buyout cycle in history. Says Tom: "There has never been a buyout market that has been this frothy. [It] looks like it's at the top."

The New York Times has a reproduction of the Declaration of Independence on the back page of the Business section today. It has been a long time (maybe never?) since I tried to read the script itself. I was admiring, among other things, the calligraphy, and thinking about how the calligrapher would have dealt with errors. To my delight, I discovered that there are two insertion carets in this draft, one in the sixteenth line from the top (an "en" was missing from "Representative") and one eleven lines up from the bottom (the word "only" is inserted in front of "by repeated injury").

There's a neat online symposium going on over at Conglomerate. The current entrant in the Conglomerate Junior Scholars Workshop is a paper by Trey Drury (Loyola - New Orleans, left) on Section 102(b)(7) of the Delaware General Corporation Law and its equivalents in other jurisdictions, which limit directors' liability for money damages (but not injunctive relief or the finding of a breach of duty) with respect to the duty of care.

As anyone who has followed my ramblings (here and at PrawfsBlawg) knows, I am hardly an empiricist. (When I want my dose of philosophical empiricism, I turn to my friend David McGowan at San Diego, who does as good a job channeling Hume as anybody I know!) I have said before that one of the great benefits of being a law professor is the wide brief to be a social philosopher. I think that brief comes with an obligation to be clear about the descriptive, the normative, and the prescriptive, even if it is just to be clear that the descriptive and the normative are difficult to separate. But I do wonder from time to time whether we jump to the prescriptive too quickly (noting that I am sure I have done the same thing).

Professor Drury is far more sympathetic to the "20-20 hindsight" problem in assessing directors' decisions than many commentators. Nevertheless, I wondered whether Professor Drury's very interesting and readable paper was a solution in search of a problem, and commented:

I'd be the last person to excoriate exercises in pure reason, but I'd still like to see some empirical work showing that most of the current bubble of corporate governance work is something other than the availability heuristic at work. There are 9,000 publicly traded companies in the U.S. - is it really the case that 102(b)(7) and its ilk are a problem for them worth the intellectual energy?

The only empirical work cited in the article (I think) is the Bradley and Schipani study, which I have not read. I'm skeptical it supports the claim that directors are "incentivized" to bad behavior, because just on the description, it sounds to be a macro look at share prices (and I'd want to dig through the methodology). Assuming it is methodologically sound, I would think about giving it more airplay at the outset as the basis for thinking there is a problem, rather than merely inferring, as a deductive exercise, that 102(b)(7) causes a problem. The sense otherwise, at least to me, is the hammer-in-search-of-a-nail problem.

Of course, it's also possible that I am a spineless, passion-less wimp.

I have recently been dipping my toe into an area that is new to me, and a colleague whom I respect as much as or more than anyone in the world offered the wise and well-meaning FWIW counsel that this may be something "you don't want to try at home." That may be par for the course in the funny hybrid that is legal academia, and a source of the prevalent (and by no means trivial) sense that "law and . . ." requires a deep level of expertise, if not an advanced degree, in the ". . ." In this particular case, the warning was that the specialists in the particular field believed that attempts to generalize or analogize from the specialty were usually off-base, because you had to be a specialist truly to understand the point, and most non-specialists screwed it up.

That is counsel worth taking to heart, but is it the end of the story? It certainly bespeaks caution, and in my case it was a wake-up to respect the precision of the particular specialty. But I started wondering about several things.

First, I drew on long practical experience to say "I have a natural distrust, born of many years of being a generalist dealing with specialists, of specialists telling me that only specialists can really understand the subject matter of the specialist, but being unable to tell me why because I'm not a specialist." When you are the generalist sitting "atop" an acquisition, for example, it's often the case that you compromise the optimum position in a specialist's area, whether it is real estate, or environmental, or insurance. But it's also possible really to hack up something if you don't understand it - I'm thinking in particular of transitional service agreements that are common when the buyer of a division needs the seller to provide a set of services to the business for a period after the closing. I have seen instances where the generalists did not understand, for example, how the SAP contract allocates "seats", because of insufficient specialized knowledge, with the result that the buyer either ended up paying more to resolve the issue, or simply had no support service.

Second, as to counseling businesses more generally, you can think of a Venn diagram with overlapping circles representing law and business, respectively. My position was always that the lawyers were responsible for understanding the overlap and being able to explain it to the business people. It didn't mean that a lawyer had to be an accountant or a manufacturing engineer, but it meant understanding enough of the cross-discipline to get the overlap right. (Many litigators love being litigators because they have to become "experts" capable of communicating to fact-finders the essence of something as to which they are not experts over and over again.)

Third, I have written before on a Harvard Business Review article from the early 1990s by Womack and Jones, the authors of the classic industrial organization study The Machine that Changed the World, entitled The Myth of the Horizontal Organization. As businesses within diversified corporations became more "empowered" and "decentralized" and "specialized," and the organization got "flatter," the question was who would be responsible for seeing the opportunities that lay between these specialties. By and large, it couldn't be the specialists.

Fourth, there's no question that scientific theories take on an analogized popular meaning. If you say something outside of quantum physics about the Heisenberg Uncertainty Principle, you are probably not talking about issues of particle momentum and position, but instead some kind of polarity in which being precise about one pole means that you cannot be precise about the other. I don't know how quantum physicists feel about that. Do they just shake their heads and say, "what can you do?" Relativity and Freudian psychology have produced similar effects.

But does that mean the analogies, or the popular sense of the scientific principle, are invalid? Do you have to be an expert in both disciplines to be cross-disciplinary? Am I wrong in saying the great 20th century philosophers of science were not scientists? Do philosophers of science and scientists of philosophy (brain science?) have anything to say to each other? Perhaps a dose of pragmatism is helpful here: if the analogy is useful, regardless of its technical bona fides, then it is worth something.