25 April 2015

The NY Times reports that a 2009 report by inspectors general for five US intelligence and law enforcement agencies indicates that secrecy surrounding the National Security Agency’s post-9/11 warrantless surveillance and bulk data collection program hampered its effectiveness. Many members of the US intelligence community struggled to identify any specific terrorist attacks it thwarted.

A redacted version of the report was released to the Times last week in response to a Freedom of Information Act lawsuit.

The Times indicates that after 9/11 President George W. Bush "secretly told the N.S.A. that it could wiretap Americans’ international phone calls and collect bulk data about their phone calls and emails without obeying the Foreign Intelligence Surveillance Act".

The report

explains how the Bush administration came to tell the chief judge of the Foreign Intelligence Surveillance Court at the time of the Sept. 11 attacks, Royce C. Lamberth, about the program’s existence in early 2002.
James A. Baker, then the Justice Department’s top intelligence lawyer, had not been told about the program. But he came across “strange, unattributed” language in an application for an ordinary surveillance warrant and figured it out, then insisted on telling Judge Lamberth. Mr. Baker is now the general counsel to the F.B.I.

It also says that Mr. Baker developed procedures to make sure that warrant applications using information from Stellarwind went only to the judges who knew about the program: first Judge Lamberth and then his successor, Judge Colleen Kollar-Kotelly.

The White House would not let Judge Kollar-Kotelly keep a copy of a letter written by a Justice Department lawyer, John C. Yoo, explaining the claimed legal basis of the program, and it rejected a request by Attorney General John Ashcroft to tell his deputy, Larry Thompson, about the program.

The report said that the secrecy surrounding the program made it less useful. Very few working-level C.I.A. analysts were told about it. After the warrantless wiretapping part became public, Congress legalized it in 2007; the report said this should have happened earlier to remove “the substantial restrictions placed on F.B.I. agents’ and analysts’ access to and use of program-derived information due to the highly classified status” of Stellarwind.

In 2003, after Mr. Yoo left the government, other Justice Department officials read his secret memo approving the program — most of which has not been made public — and concluded that it was flawed.

Among other things, the report said, Mr. Yoo’s reasoning was premised on the assumption that the surveillance act, which requires warrants for national security wiretaps, did not expressly apply to wartime situations. His memo did not mention that a provision of that law explains how it applies in war: The warrant rule is suspended for the first 15 days of a war.

The report has new details about a dramatic episode in March 2004, when several Justice Department officials confronted Alberto R. Gonzales, the White House counsel at the time, in the hospital room of Mr. Ashcroft over the legality of the program. The officials included Mr. Thompson’s successor as deputy attorney general, James B. Comey, who is now the F.B.I. director, and the new head of the office where Mr. Yoo had worked, Jack Goldsmith.
The showdown prompted Mr. Bush to make two or three changes to Stellarwind, the report said.

But while the report gives a blow-by-blow account of the bureaucratic fight, it censors an explanation of the substance of the legal dispute and Mr. Bush’s changes.
Last year, the Obama administration released a redacted version of a memo that Mr. Goldsmith later wrote about Stellarwind and similarly censored important details.

Nevertheless, it is public knowledge, because of documents leaked by the former intelligence contractor Edward J. Snowden, that one part of the dispute concerned the legality of the component of Stellarwind that collected bulk records about Americans’ emails.
Mr. Snowden’s disclosures included a working draft version of the N.S.A. inspector general’s contribution to this report, roughly 50 pages long. The final document — with many passages redacted as still classified — was part of Friday’s release.
...

The Justice Department created the new type of investigation, initially called a “threat assessment,” which could be opened with lower-grade tips. Agents now use them tens of thousands of times a year.

But little came of the Stellarwind tips. In 2004, the F.B.I. looked at a sampling of all the tips to see how many had made a “significant contribution” to identifying a terrorist, deporting a terrorism suspect, or developing a confidential informant about terrorists.

Just 1.2 percent of the tips from 2001 to 2004 had made such a contribution. Two years later, the F.B.I. reviewed all the leads from the warrantless wiretapping part of Stellarwind between August 2004 and January 2006. None had proved useful.
Still, the report includes several redacted paragraphs describing “success” cases.

For the last two decades, socio-legal scholars have studied the way ordinary people, and predominantly disempowered people, experience and understand law. Only a few studies have focused on the legal consciousness of the upper-middle class or those who hold greater economic, social or symbolic power.

The following dissertation adds to this body of knowledge through an online ethnography of the legal consciousness of the editors of Wikipedia. As the dissertation reveals, legality holds a surprisingly central place in Wikipedia, especially given the expressed rejection of legality in the community’s ethos. Wikipedians manage a complex and delicate system of formal rules and dispute-resolution institutions that extensively use legal vocabulary and rely on the paradigmatic structures and images of national law. The centrality of legality in Wikipedia further poses the question of the interrelations that are created when an egalitarian, open, participatory and ad-hoc community incorporates formality, strict procedures and semi-legal institutions and vocabulary.

24 April 2015

The paper contrasts measures of teacher effectiveness with the students’ evaluations for the same teachers using administrative data from Bocconi University. The effectiveness measures are estimated by comparing the performance in follow-on coursework of students who are randomly assigned to teachers. We find that teacher quality matters substantially and that our measure of effectiveness is negatively correlated with the students’ evaluations of professors. A simple theory rationalizes this result under the assumption that students evaluate professors based on their realized utility, an assumption that is supported by additional evidence that the evaluations respond to meteorological conditions.

The authors state

The use of anonymous students’ evaluations of professors to measure teachers’ performance has become extremely popular in many universities (Becker & Watts, 1999). They normally include questions about the clarity of lectures, the logistics of the course, and many other matters. They are either administered during a teaching session toward the end of the term or, more recently, filled in online.

The university administration uses such evaluations to solve the agency problems related to the selection and motivation of teachers, in a context in which neither the types of teachers, nor their effort, can be observed precisely. In fact, students’ evaluations are often used to inform hiring and promotion decisions (Becker & Watts, 1999) and, in institutions that put a strong emphasis on research, to avoid strategic behavior in the allocation of time or effort between teaching and research activities (Brown and Saks, 1987 and De Philippis, 2013).

The validity of anonymous students’ evaluations rests on the assumption that, by attending lectures, students observe the ability of the teachers and that they report it truthfully when asked. While this view is certainly plausible, there are also many reasons to question the appropriateness of such a measure. For example, the students’ objectives might be different from those of the principal, i.e. the university administration. Students may simply care about their grades, whereas the university cares about their learning and the two might not be perfectly correlated, especially when the same professor is engaged both in teaching and in grading. Consistent with this interpretation, Krautmann and Sander (1999) show that, conditional on learning, teachers who give higher grades also receive better evaluations. This finding is confirmed by several other studies and is thought to be a key cause of grade inflation (Carrell and West, 2010, Johnson, 2003 and Weinberg et al., 2009).

Measuring teaching quality is also complicated because the most common observable teachers’ characteristics, such as qualifications or experience, appear to be relatively unimportant (Hanushek et al., 2006, Krueger, 1999 and Rivkin et al., 2005). Despite such difficulties, there is evidence that teacher quality matters substantially in determining students’ achievement (Carrell and West, 2010 and Rivkin et al., 2005) and that teachers respond to incentives (Duflo et al., 2012, Figlio and Kenny, 2007 and Lavy, 2009). Hence, understanding how professors should be monitored and incentivized is essential for education policy.

In this paper we evaluate the content of the students’ evaluations by contrasting them with objective measures of teacher effectiveness. We construct such measures by comparing the performance in subsequent coursework of students who are randomly allocated to different teachers in their compulsory courses. We use data about one cohort of students at Bocconi University – the 1998/1999 freshmen – who were required to take a fixed sequence of compulsory courses and who were randomly allocated to a set of teachers for each of these courses.

We find that, even in a setting where the syllabuses are fixed and all teachers in the same course present exactly the same material, professors still matter substantially. The average difference in subsequent performance between students assigned to the best and worst teacher (on the effectiveness scale) is approximately 23% of a standard deviation in the distribution of exam grades, corresponding to about 3% of the average grade. Moreover, our measure of teaching quality is negatively correlated with the students’ evaluations of the professors: teachers who are associated with better subsequent performance receive worse evaluations from their students. On the other hand, teachers who are associated with high grades in their own exams rank higher in the students’ evaluations.

These results question the idea that students observe the ability of the teacher during the class and report it (truthfully) in their evaluations. In order to rationalize our findings it is useful to think of good teachers – i.e. those who provide their students with knowledge that is useful in future learning – as teachers who require effort from their students. Students dislike exerting effort, especially the least able ones, and when asked to evaluate the teacher they do so on the basis of how much they enjoyed the course. As a consequence, good teachers can get bad evaluations, especially if they teach classes with a lot of bad students.

Consistent with this intuition, we also find that the evaluations of classes in which high-skill students are over-represented are more in line with the estimated quality of the teacher. Additionally, in order to provide evidence supporting the intuition that evaluations are based on students’ realized utility, we collected data on the weather conditions observed on the exact days when students filled the questionnaires. Assuming that the weather affects utility and not teaching quality, the finding that the students’ evaluations react to meteorological conditions lends support to our intuition. Our results show that students evaluate professors more negatively on rainy and cold days.
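The mechanism the authors describe can be illustrated with a toy simulation (this is purely a hypothetical sketch, not the authors' code or data, and all parameter values are invented for illustration): if better teachers also demand more effort, and students rate teachers on realized utility rather than on learning, the estimated teacher effectiveness and the average evaluation come out negatively correlated.

```python
import numpy as np

rng = np.random.default_rng(0)
n_teachers, n_students = 50, 200

# Assumed setup: teacher quality raises students' follow-on performance,
# but better teachers also require more effort from their students.
quality = rng.normal(size=n_teachers)
effort_required = 0.8 * quality + 0.2 * rng.normal(size=n_teachers)

value_added = np.empty(n_teachers)
evaluation = np.empty(n_teachers)
for t in range(n_teachers):
    noise = rng.normal(size=n_students)
    follow_on = quality[t] + noise                # performance in later courses
    utility = -effort_required[t] + 0.3 * noise   # students dislike effort
    value_added[t] = follow_on.mean()             # estimated effectiveness
    evaluation[t] = utility.mean()                # evaluations track utility

# Pearson correlation between effectiveness and evaluations
r = np.corrcoef(value_added, evaluation)[0, 1]
print(r)  # negative: better teachers get worse evaluations
```

Under these (invented) parameters the correlation is strongly negative, mirroring the sign of the paper's finding; nothing here replicates their estimation strategy, which uses random assignment and follow-on coursework rather than simulated data.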

There is a large literature that investigates the role of teacher quality and teacher incentives in improving educational outcomes, although most of the existing studies focus on primary and secondary schooling (Figlio and Kenny, 2007, Jacob and Lefgren, 2008, Kane and Staiger, 2008, Rivkin et al., 2005, Rockoff, 2004, Rockoff and Speroni, 2010 and Tyler et al., 2010). The availability of internationally standardized test scores facilitates the evaluation of teachers in primary and secondary schools (Mullis et al., 2009 and OECD, 2010). The large degree of heterogeneity in subjects and syllabuses in universities makes it very difficult to design common tests that would allow comparison of the performance of students exposed to different teachers, especially across subjects. At the same time, the large increase in college enrollment that has occurred in the past decades (OECD, 2008) calls for a specific focus on higher education.

Very few papers investigate the role of students’ evaluations in universities, and we improve on existing studies in various dimensions. First of all, the random allocation of students to teachers differentiates our approach from most other studies (Beleche et al., 2012, Johnson, 2003, Krautmann and Sander, 1999, Weinberg et al., 2009 and Yunker and Yunker, 2003), which cannot purge their estimates of the potential bias arising from the best students selecting the courses of the best professors. Correcting this bias is pivotal to producing reliable measures of teaching quality (Rothstein, 2009 and Rothstein, 2010).

The only other study that exploits a setting where students are randomly allocated to teachers is Carrell and West (2010). That paper documents (as we do) a negative correlation between the students’ evaluations of professors and harder measures of teaching quality. We improve on their analysis in two important dimensions. First, we provide additional empirical evidence consistent with an interpretation of this finding based on the idea that good professors require students to exert more effort and that students evaluate professors on the basis of their realized utility. Secondly, Carrell and West (2010) use data from the U.S. Air Force Academy, while our empirical application is based on a more standard institution of higher education. The vast majority of the students in our sample enter a standard labor market upon graduation, whereas the cadets in Carrell and West (2010) are required to serve as officers in the U.S. Air Force for 5 years after graduation and many pursue a longer military career. There are many reasons why the behavior of teachers, students and the university or academy might vary depending on the labor market they face. For example, students may put higher effort into subjects or activities particularly important in the military setting at the expense of other subjects, and teachers and administrators may do the same.

More generally, this paper is also related and contributes to the wider literature on performance measurement and performance pay. One concern with the students’ evaluations of teachers is that they might divert professors from activities that have a higher learning content for the students (but that are more demanding in terms of students’ effort) toward classroom entertainment (popularity contests), or lead them to change their grading policies. This interpretation is consistent with the view that teaching is a multi-tasking job, which makes the agency problem more difficult to solve (Holmstrom & Milgrom, 1994). Subjective evaluations can be seen as a means to address such a problem and, given the very limited extant empirical evidence (Baker et al., 1994 and Prendergast and Topel, 1996), our results can certainly also inform this area of the literature.

The paper is organized as follows. Section 2 describes the data and the institutional setting. Section 3 presents our strategy to estimate teacher effectiveness and shows the results. In Section 4 we correlate teacher effectiveness with the students’ evaluations of professors. Robustness checks are reported in Section 5. In Section 6 we discuss the interpretation of our results and we present additional evidence supporting such an interpretation. Finally, Section 7 concludes.

23 April 2015

Privacy law has languished for decades while the other information law doctrines have flourished. This paradox can be explained by the relative weight assigned respectively to moral argument versus economic argument.

Privacy law is unique in that it continues to be steered foremost by moral intuition. What qualifies as a “violation” of privacy is predicated largely on the moral reprehensibility of the act in question. By stark contrast, the intellectual property regimes have long since converted to being led primarily by economic considerations, and only secondarily by non-economic factors.

That distinction is counterproductive and nonsensical. Personal data is an informational good like any other. The same economic justifications for intellectual “property” can be extended to intellectual “privacy” — nonexclusivity harms the incentives to generate new information that can further the progress of social knowledge.

Where moral rhetoric has failed to advance robust recognition of privacy interests, economic reasoning may prove more effective. In particular, this Essay offers Edmund Kitch’s prospect theory as an important counterweight to prior economic critiques of privacy, which have frowned on restraints on alienation of information. Prospect theory shows that the social value of recognizing exclusive claims is not just to shield information that already exists, but also to shield deeper investigations of that information to unearth further information that would not be otherwise discoverable.

'Beyond Personhood: From Two Conceptions of Rights to Two Kinds of Right-Holders' by Tomasz Pietrzykowski comments

The debate between the so-called interest and will theories of rights is long and well known. I argue that in respect of legal rights it is plausible to claim that there are just two different kinds of normative situations created by rules of law. One of them corresponds to "Interest Rights" while the other corresponds to "Choice Rights". Moreover, there are essentially different conditions for the plausible ascription of each of those kinds of rights. In view of that I suggest that two kinds of right-holders should be distinguished – creatures able to hold only elementary (interest) rights and those apt to possess personal (choice) rights too. The first category includes all sentient creatures (as non-personal subjects of law) while the other refers to beings possessing the qualities of a person. However, extending the scope of legitimate right-holders should not result in any regress in the present level of legal protection enjoyed by human beings. In order to avoid the risk of such regress I develop the idea of a modestly specist approach to personhood, applying different criteria of subjecthood and personhood to human and non-human creatures.

Pietrzykowski states

claims to recognize non-personal subjecthood or personhood of non-human creatures should by no means imply depriving any human being of her status. Our fallibility, proved by the long list of past scientific and moral misconceptions that have led to abhorrent social and legal practices, should prevent us from too easily adopting any views that could pose any danger of their return.

As a result, I am inclined to think that a reasonable approach to the reconsideration of the moral foundations of personhood in law should remain modestly specist. The deeply reformed and non-exclusive version of humanistic specisism I have in mind allows for some relative favor in the treatment of human beings, but by no means excludes the granting of elementary or even personal rights to non-human creatures. Contrary to radically specist humanism, it does not regard human good as practically the only legitimate goal of the law (or at least as strongly preempting any other considerations). There is no metaphysical property making all and only members of the human species intrinsically more valuable than any other actual or possible creatures in the world. Neither we nor anyone else occupy an exclusive, superior position in the moral universe justifying absolute priority of our interests over all others.

Moreover, there is no essential connection between humanness and holding elementary rights. In principle, each creature that is able to have subjective interests is a potential candidate to hold some elementary rights (insofar as its interests may deserve protection by means of imposing legal duties on others). Nor is there an essential connection between humanness and personhood. There is only an empirical question whether any non-human creatures possess mental qualities sufficient for them to be plausibly conferred with personal rights (and not only elementary rights). At the moment, the answer to this empirical question still seems negative. Nonetheless, there is nothing absurd or even unrealistic in considering creatures that do not meet (or only partially meet) the biological criteria of humanness, but possess capacities enabling them to have many kinds of personal rights.

On the other hand, modest specism accepts that there may be different criteria applicable in deciding the appropriate legal status of human and non-human creatures. The legal status of non-human creatures (animals or human-animal hybrids, chimeras, artificially intelligent cyborgs, etc.) should principally depend on their actual mental capabilities. Those which are sentient may deserve the status of subjects of law (that is, some of their subjective interests may become legally protected as elementary rights). Their status as persons in law would require evidence that they are able to develop the actual mental capabilities enabling them to hold and exercise personal rights.

It is, however, hardly acceptable to apply the same criteria to human creatures. It would result in a serious decrease in the level of their legal protection. Many human beings, e.g. newborns or adults with severe mental dysfunctions, would have to be regarded as non-personal subjects of law rather than persons whose rights are exercised by competent custodians (as is the case today). Such conclusions, however, may and should be avoided. The evolution of the moral underpinnings of the law should lead to extending rather than shrinking the circle of legal recognition. The concept of non-personal subjecthood may be useful to upgrade the status of borderline entities which today are either in an undefined grey area or actually deprived of any legal protection whatsoever. In particular, it may concern embryos, pre-implanted zygotes, anencephalic newborns, or human organisms with severely damaged neural structures making them irreversibly unconscious. The intermediate concept of non-personal subjects of law could help to work out more refined and balanced solutions to the problems arising in respect of such subjects, which seem to fit neither the category of full persons nor that of mere things.

Regarding human personhood in law, the criteria applicable to human beings should at least allow all of them to count as persons from birth to death (including mentally ill or demented individuals). Thus, the basis for granting personhood to human beings should not be the possession of actual mental capacities sufficient to exercise personal rights. Unlike other creatures, human beings need only sentience to be regarded as persons. For a human organism to qualify as a non-personal subject of law, it need only show the potential to develop sentience, or have been a sentient creature in the past.

The Age reports that the Royal Commission into Trade Union Governance and Corruption has

demanded access to 80,000 building workers' private details and the names of anyone who has attended a shop stewards' meeting in Victoria.

The Construction, Forestry, Mining and Energy Union (CFMEU) has reportedly characterised the orders as "draconian" and as a major privacy breach that could be used to blacklist union members from future construction projects.

The orders, apparently sent to CFMEU branches this month, require the union to provide member details, including residential addresses. The orders were revised last Friday to allow the omission of some details.

The Fair Work Building and Construction (FWBC) - the construction sector industrial relations inspectorate - is reported as stating that it was not privy to documents obtained by the Royal Commission and would never undermine someone's freedom of association -

It is farcical to suggest we would be breaking the very laws we so vigorously enforce

The agency currently has several investigations into alleged freedom of association breaches.

In March the CFMEU was found guilty of contempt over an eight-hour blockade of the Bald Hills Wind Farm Project with cars and barbecue trailers in April 2014, attracting a fine of $125,000 and indemnity costs. The CFMEU, its Victorian Assistant Secretary Shaun Reardon and former official Danny Berardi were fined $43,000 for attempting to coerce a head contractor into signing an enterprise agreement with the CFMEU.
FWBC filed contempt of court charges against the CFMEU after it breached a court undertaking by blockading the site.

WA Today reports that a WA police officer has been charged over disclosing information about Ben Cousins to the officer's friend, who is a journalist. It's a somewhat muddled report -

Seven News reporter Monique Dirksz, who was in a relationship with a police officer charged with disclosing secret information to her about Ben Cousins.

A policeman has been charged over leaking confidential information about the arrest of Ben Cousins to a Channel 7 Perth news reporter he was in a relationship with.

The 29-year-old first class constable was stood down immediately after the incident, amid an investigation by the Internal Affairs Unit.

On Tuesday, Constable Jamieson was charged with four counts of disclosing official secrets. He will appear in Perth Magistrates Court on May 5.

Acting Police Commissioner Steve Brown said police would allege the information divulged by the officer was "not within the remit" of his work "and that it gave an advantage to that particular journalist".

...
Mr Brown said the tip-off allegedly provided to Ms Dirksz had led her to be outside Fremantle Police Station at 2am - the exact time Cousins was released on bail from the station. His exit was captured on camera, with Channel 7 running "exclusive" vision and Ms Dirksz participating in multiple radio interviews the next day.

Mr Brown told 6PR Radio that 112 police officers, including some in regional WA, had accessed Cousins' police record 300 times in two days through the computer aided dispatch system. Information was also accessed about former West Coast Eagle Daniel Kerr.

"About half are going to clearly be what has been described as professional curiosity and we agree with that. Those officers have absolutely nothing to fear," Mr Brown told 6PR Mornings host Gary Adshead.

Secret, confidential, professional curiosity ...

The report states -

"The remaining half we are still working through to try and identify why they would have accessed that record. From where the investigation is currently at, those officers weren't working at the time or weren't working in close proximity. They had no need – 10 of them or thereabouts were working in regional Western Australia."

Nine of the 112 have been previously disciplined over similar breaches, he said.

"We want to know why they have accessed these records. It's a breach of trust."

Some curiosity kills some cats, it seems, although recurrent breaches might raise questions about the effectiveness of the previous discipline.

Brown is reported as indicating -

advice had suggested that Ms Dirksz had done nothing wrong and no criminal charges had been laid against her.

"We expect that journalists right across the sector have people that they speak to. They'll source information about what's happening... But for the police, from an agency perspective, we're talking about officers here, or an officer, who disclosed information they didn't have the right to do," he said.

"It is an absolute breach of trust not only for this agency but for the community at large. The community needs to know that police officers are looking after the integrity of the information."

In Victoria the Herald-Sun reports on plans to equip the state's 220 highway patrol cars with ANPR cameras (at a cost of $86 million) "to spy on errant drivers, bikies, and suspected terrorists".
The characteristically breathless report, based on an 88-page report by Deloitte accessed under FOI, indicates -

Linking the cameras to a central unit sharing, sorting and storing footage could also help police track vehicles associated with known terrorists, outlaw bikies, burglars, sex offenders and arsonists, it says.

The cameras, which can scan and record thousands of numberplates a minute and check them against vehicle, criminal and sheriff’s office records, could gather intelligence on “persons of interest” and identify patterns of behaviour and relationships.

Consistent with the usual rhetoric - and ignoring cautions such as those noted here and here - the report

warns that Victoria’s failure to formally adopt ANPR technology — all other states have done so — is hampering law enforcement and efforts to cut the road toll.

Deloitte says the force lacks an advanced intelligence capacity to find vehicles of interest, and correlate their movements, from among hundreds of thousands of numberplates, times and places that would be captured daily.

It recommends a staged rollout, so the required infrastructure can be built and any necessary changes to privacy laws can be debated.

VicRoads estimates 38,000 unlicensed drivers take to the state’s roads every day, and on average one is involved in a fatal crash every fortnight.
If the cameras were fitted to all 220 highway patrol cars, Deloitte estimates an additional 120,000 unregistered cars a year and nearly 66,000 dodgy drivers would be caught.

Deloitte found each of five pilot ANPR units fitted to police cars was detecting an average of 53 unregistered vehicles and 33 unlicensed, disqualified or suspended drivers a month, compared with just seven vehicles and 14 drivers for regular highway patrols.
They scanned more than four million numberplates in the nine months to last October, detecting 84,000 unlicensed drivers and 53,000 unregistered vehicles.

'Post-Critical China: There is no Author, just Content!' by Christopher Brisbin, in critic|all I International Conference on Architectural Design & Criticism (2014) asks

do we critique contemporary creative works or speculations that claim of themselves no inherent
critical function—that claim to transcend any conscious act of criticality—works that are apparently
nothing more than outcomes of programmatic and commercial pressures placed upon them? In the
1990s, architects Robert Somol, Sarah Whiting, Michael Hays, and Rem Koolhaas identified the
emergence of a new kind of critical practice in Western Architecture that transcended the dogmatic and
labored theoretical pursuits of the preceding anti-humanist theories of the 60s, 70s and 80s. Liberated
from the esoteric self-indulgencies of trans-disciplinary high theory, post-critical architecture sought to
reassert its explicit disciplinary knowledge and expertise and absolve itself of any overt critical
functioning, however its critical functioning is not so easily erased, especially in the contemporary
architecture of China. The paper complexifies the commonly accepted definition of post-criticality as
uniformly uncritical, suggesting that, whilst pretending to be neutral, post-critical works, such as those of
exemplary post-critical artist Michael Zavros, are in fact—by the very condition of their active claim of
opposition to avant-garde critical practices—“inherently political and partisan.” The paper concludes
that the acquiescence from criticality itself therefore becomes a form of critical action and “conformist
non-conformity” that demonstrates the inherent potential to re-purpose the operative mechanisms of
the neo-liberal political economy to an artist’s or architect’s own post-critical and creative ends. Whilst
written from a Western perspective, the paper reflects on the cultural intersection of East and West that
is manifest in recent counterfeit architectural projects in China. As China attempts to reconcile its own
political ideologies of hierarchical Communism with the economic structures of free-market Capitalism, it
facilitates the ultimate transgressive act; consuming the post-critical ruins of the West and regurgitating
them anew as a new form of global criticality. Perhaps we have more to learn from China than they do
from the West. In the East, there is no author, just content.

Brisbin comments

Whilst the literature in Art, Architecture, and Philosophy generally indicts the post-critical as explicitly
‘un-critical’ in its ideation and inert of any deliberate critical functioning, it is the aim of the essay to
argue that post-critical work, both within art and architecture, is inherently critical; not through its
conscious application, but through the instrumental analysis of its unconscious application of the
capitalist market-system that is its supposed author. In so doing, the essay aims to therefore
demonstrate ways in which a post-critical position can be re-framed as a critical lens through which to
reflect upon the work’s socio-political and socio-economic context. In particular, the essay explores the
affect of the affirmational practices of Chinese Capitalism as a means through which to cast a mirror
upon the world of the narcissistic consumptive practices of glamour and allure that is fueling the
feverous ‘status consumption’ of Western luxury brands and architecture by the swelling Chinese
middle-class. The essay therefore proposes that China requires a post-critical reading in the form of a
‘critical u-turn’ so as to cast a critical gaze upon the effects of Chinese growth and the quasi-oligarchical
rule of the People’s Republic of China.

As Jean-Francois Lyotard has identified in his critique of the postmodern condition, it is not theory that
binds the apparent randomness of postmodern plurality, it is Money. The cultural value associated with
the artistic practices of Art and Architecture is replaced by an ontologically flat conception of the world
as no greater than its commodified market value. Artworks are no longer valued for their pleasure or
affect, nor are houses valued as culturally enriching ‘homes’: they are quantifiable and measurable
financial investments. Critical theory’s esoteric indulgences isolated its audience and diluted its cultural
relevancy to the point at which society demanded an affirmational quality of its Art; a sense of familiarity
that subverted any productive critique of the systemic mechanisms that led to its benign cultural
relativism; or its subsequent celebration of increasingly narcissistic notions of beauty and taste and the
predictable effects these notions had on the composition of the built environment.

As Lyotard observes:
“Eclecticism is the degree zero of contemporary general culture: one listens to reggae, watches a
western, eats McDonald’s food for lunch and local cuisine for dinner, wears Paris perfume in Tokyo and
“retro” clothes in Hong Kong; knowledge is a matter for TV games.” Every aspect of our consumer life
is thus tainted with contradictions of cultural authenticity, fused together as a lattice of simulacra of
exotic places, experiences, and identities. It is this cultural branding that Nigel Thrift argues has
manifested in contemporary culture as a ‘glamorous celebrity sign system’ in which the legible aesthetic
signature of branded architects and artists has become paramount in achieving the transformation of
inanimate objects into sites of lust, desire, and status, which are all fundamental to the effective
functioning of the market system.

This overarching financial force results in, as Leonard further observes, “some prominent art …
[preferring] instead to be appealing, entertaining and affirmative.” They elevated themselves, through
the direct engagement and manipulation of the fundamental economic practices of supply and demand,
to the status of a brand. In order to explain this shift, Leonard cites art historian Rex Butler’s indictment
of the ‘artist as brand’ as a ‘post-critical’ turn that can be understood as an irreconcilable outcome of our
consumerist age and, as I argue, a fundamental reflection of the thirst for familiarity in the milieu of
semiotic and informational saturation that is emblematic of the age. This definition of the post-critical is
applicable more broadly than just within an art practice or a theoretical pursuit.

China offers, I argue, the most extreme post-critical cultural context. China has appropriated the
architecture of the West, vehemently embracing it as the accoutrements of modernization and social
status. To promote the image of success in China apparently means to look like, and dwell in, the icons
of Western modernity; such as reproductions of Le Corbusier’s Ronchamp (1954) in Zhengzhou in
2004; reproductions of the architecture and engineering of Haussmann’s nineteenth-century Paris in the
ironic montage of Tianducheng’s 2007 Eiffel Tower copy with surrounding baroque cityscape buildings
(Fig. 1); or, more recently, reproductions of Zaha Hadid’s SOHO shopping complex in Beijing (2011–14)
in the Meiquan 22nd Century building in Chongqing (2012) (Fig. 2), to name but a few. As a Western
onlooker, what has become curious is the seemingly unquestioning belief in the right to the production
and consumption of counterfeit goods. It is an economy that is fundamentally built upon a culture of
appropriation of the best that Western Architecture has to offer without any approval or
acknowledgement of copyright ownership. Architect Rem Koolhaas observes that the sheer speed of
development and rapid growth in China creates a kind of design practice based in the art of collage and
figurative appropriation; a Photoshop-based architect who practices a kind of aesthetic and
compositional design process that resembles Photoshop montage. The socio-political power structures
that are embedded within these Western exemplars are effectively reduced to a form of image-based
collage as any uniform meaning carried by the semiotic language of its collaged elements is usurped by
aesthetic conditions of familiarity and affirmation.

It is important to acknowledge that this is not necessarily a new phenomenon in China. China has a
long-standing history of cultural appropriation and consumption that has privileged the collective over
the individual. Ideas are not perceived to be owned by an individual, they are owned by and shared with
the collective. Sharing takes the form of both conscious and deliberate contributions in order to benefit
the whole, and also an acknowledgement that work generated within China is openly accessible to
appropriation and re-fashioning to ends never intended by its authors. This is at odds with Western
concepts regarding the individual rights of the designers of artifacts, which cannot be replicated or
copied without the permission of their author. This clash of Western and Eastern culture produces a vastly
different cultural perception as to what it means to copy another’s work. Whilst Chinese Law closely
mirrors ‘in-principle’ author rights and copyright conditions that we are familiar with in the West, these
contrasting social perceptions about the Chinese right to copy continue to linger today. Whilst much has
been written in both the academic domain and in the popular press about China’s meteoric economic
rise and the cultural factors directly affecting the consumption of counterfeit goods, there is little
literature accounting for broader cultural attitudes that may offer an alternative perspective as to why the
Chinese appear so relaxed in their attitudes towards the copying of Western architecture.

... The
display of luxury denotes economic, and thus, social and familial success in China.

This phenomenon is fundamental to Chinese culture and is widely applied as a form of demonstrable
social and economic conformity by the middle-class through the collective consumption of widely
recognizable Western aesthetic styles, brands, and architecture. The consumption and display of such
objects, through their desire for, and consumption of, fashion, jewelry, and homes, aims to deliberately
promote the social status of the middle-class, and demonstrate their good judgment and understanding
of socially defined notions of ‘taste’. Thus, as Kant observed of the emerging middle-class of Europe in
the eighteenth-century, a citizen is able to denote their social standing by demonstrating their
knowledge of the limits and boundaries of acceptable ‘taste’, and thus be assimilated within that socioeconomic
grouping. Whilst drawn from a Western definition of taste, its application still applies today in
China; however, the Chinese are more susceptible to the social pressures that arise in maintaining the
aura of their social status than their Western counterparts, expressing a “need to identify with or
enhance [their] image in the opinion of significant others through the acquisition and use of products and
brands [and] the willingness to conform to the expectations of others regarding purchase decisions.” ...

There is little academic literature accounting for the widespread copying of Western architecture in
China, aside from Bianca Bosker’s Original Copies: Architectural Mimicry in Contemporary China
(2013), which primarily examines Chinese architectural copying from the perspective of the conceptual
relationship between the authentic original and the nature of the Baudrillardian ‘simulacra’ that is
present in the copy. However, much can be learnt from the research that has been conducted into the
prolific counterfeiting of movies and computer games in particular, in order to perhaps better understand
the motivations of the Chinese in their architectural piracy activities. ...

[E]ven architecture can become a commodification of the consumerist brands it aims to house. Rem
Koolhaas, Zaha Hadid, and Michael Zavros are recognizable International brands that embody lifestyle,
success, and luxury. In Koolhaas’ Prada New York (2001), the architecture becomes an objectified
‘spatial version’ of the Prada brand that brings into being the aesthetic ‘excesses of exclusivity’ that such
luxury brands embody in China. They represent a culture of desire and lust that is deferred through the
symbolic allure and promise of lifestyle and taste that is presented by the star’s associations with the
work they produce. It is no wonder that in the copying culture of China, the work of architect brands is
reproduced so openly.

For Baudrillard, aesthetics and economics are ontologically reified as a “single cultural form that
becomes the essence of the consumer society.” The irony is that as artworks critically react to market
forces placed upon them within the dominant Western economic system of Capitalism, they are
subsequently consumed by the very system that they seek to disrupt. For Firat and Venkatesh, there
are only two possibilities for cultural critique in this capitalist framework: re-appropriation or
marginalization. It is either subsumed into the market, or it is marginalized by it, to the point that it is no
longer relevant. Faced with the undeniable agency of post-critical architecture’s causal effects, post-critical
architecture can be argued to be far more critical than the traditional criteria for asserting
criticality would infer: “the post-critical turn is not the rise of uncritical approaches to art, but a
reconsideration of what it means to be critical.” As Diana Mihai has observed, “[p]ost-critical
architecture pretends to be politically neutral/post-critical and rejects social critique, but the fact that it is
modeled on contemporary business practices and market systems renders it inherently political and
partisan.” As such, architecture, when conceived of as an edifice of cultural values and principles,
cannot easily be differentiated from its economic and political symbolism—even when its voice is left
relatively mute by repressive cultural forces, such as those in China.

It would appear that the Chinese middle-class—perhaps even more than their American counterparts—
desire an art and architecture that is affirming, and that does not challenge the overarching power
structures of (for the Chinese) ‘Commu-Capitalism’. As Chinese contemporary artist Chen Danqing
notes: they wish to be "an artist within the system", not against it! Architectural examples such as the
Beijing National Stadium (2003-8), known popularly as the Bird's Nest, deploy symbols of Chinese
culture that are familiar to the West, but say nothing new and are inert of any active contemporary
voice. They are simply propaganda expressions that ‘stage Chinese-ness’. Their audience is the West,
not the Chinese themselves. When Australian architects Ashton Raggatt McDougall said ‘sorry’ in braille
above the Garden of Australian Dreams in the National Museum of Australia (2001), barely a ripple of
discussion or cultural debate was catalyzed in the Australian media acknowledging Australia’s highly
contentious history of mistreatment of indigenous Australians. When Chinese contemporary artist Ai
Weiwei said sorry on behalf of a grieving Chinese nation for the unacknowledged loss of 9000 children
in the 2008 Sichuan earthquake, he was ‘detained’ for ninety-one days by the Chinese government. ...

A ‘post-critical u-turn’ is required to ensure a critical engagement with the “present imbalances in power
and opportunity” that Architecture so often ignorantly reinforces. ... As China attempts to reconcile its own political ideologies
of hierarchical Communism with the economic structures of free-market Capitalism, it facilitates the
ultimate transgressive act; consuming the ruins of the West and regurgitating them anew as a new form
of global post-criticality. In China, there is no author, just content subsumed from the West. Cultural
artifacts are therefore interpretations and translations of what the artist and/or architect see in their
world, regardless of whether their author intended them to be explicitly critical or not. These creative
artifacts always possess a critical voice—the question is whether we have the fortitude and resilience to
objectively listen to what they critically have to say about us.
So what do the copied art/architecture works in China say? In reflecting on the deconstructive lens
through which the paper has attempted to trace contrasting cultural meanings of copying, copyright, and
criticality in China, the discursive zeal and indictment of ‘counterfeit’ copying has perhaps naively led to
a negative and Western-centric inference that China should adopt the West’s ideological attitudes to
copyright as a basis for Capitalism. ... What we can glean from much of China’s contemporary art and
architecture is that it engenders a kind of complicit-ness in/to consumer culture that in turn engenders
the work with a “conformist non-conformity.” As Virginia Postrel astutely observes, “form follows
function” has been effectively usurped by “form follows emotion,” but is this outcome such a bad thing?

'Legal Obligations of States Directly Affected by Cyber-Incidents' by Oren Gross in (2015) 48 Cornell International Law Journal comments

Much has been written in recent years about cyberspace as a new domain for warfare. The magnitude of the threats cannot be overstated. Cyber attacks can disable whole countries (e.g., Estonia) as well as companies (e.g., Sony) and cyber-security incidents in sectors such as communications, finance, transportation and utilities can have catastrophic consequences.

The discussion to date has tended to focus on two common conceptions. First, regardless of the failure to arrive at widely accepted definitions of terms such as cyber “crime,” cyber “espionage,” cyber “attacks” and cyber “warfare,” they have mostly been regarded as willfully perpetrated, pre-meditated and intentional. Second, existing literature (certainly legal literature) has focused exclusively on the legal obligations of, and possible sanctions against, states and non-state actors that orchestrated cyber attacks.

In this article I offer radically different perspectives on both counts. First, the article recognizes that the harm to both computer networks and physical systems interconnected with them may be as catastrophic when the source of damage is not intentional but rather the result of human error or conventional threats. Second, I offer the first exploration and analysis of possible obligations that may be imposed not on the state (or non-state actor) that originated the attack, but rather on the directly affected state, i.e., the state that is the target of the attack or the cyber incident. I argue that imposing legal and technological responsibilities on the state that has been exposed to a cyber incident is warranted both as a matter of conceptualizing state sovereignty and due to the state’s various obligations to other states and the global community.

Thus, the article canvasses the possible bases for, and scope of, responsibilities that may be borne by states that are directly affected by cyber-security incidents before, during and after a cyber-security incident materializes.

'Data Breach (Regulatory) Effects' by David Thaw in (2015) Cardozo Law Review de Novo comments

Breach notification laws have been a major driver of data protection efforts in U.S. organizations for over a decade. This form of disclosure-based regulation exists in 47 of 50 U.S. states, as well as four other U.S. jurisdictions, but has yet to be adopted as a law of general applicability at the Federal level.

This Essay considers the effects the structure of existing disclosure-based cybersecurity regulation has on the efficacy of U.S. firms' cybersecurity measures. Drawing on previous empirical work and analysis of firm incentives, it suggests two modest conclusions about the most efficacious legal structures: 1) that any disclosure-based regulation should be part of a broader cybersecurity regulatory framework; and 2) that any risk-of-harm threshold triggering notification should bear a presumption in favor of notification. Based on these conclusions, I suggest a preliminary regulatory prescription for policymakers considering adoption or standardization of disclosure-based regulation in the data protection context.

22 April 2015

'From Big Law to Lean Law' by William D. Henderson in (2014) 38(Supp) International Review of Law and Economics 5 comments

In a provocative 2009 essay entitled The Death of Big Law, the late Larry Ribstein predicted the shrinkage, devolution, and ultimate demise of the traditional large law firm. At the time virtually no practicing lawyer took Larry seriously. The nation's large firms were only one year removed from record revenues and profits. Several decades of relentless growth had conditioned all of us to expect the inevitable rebound. Similarly, few law professors (including me) grasped the full reach of Larry's analysis. His essay was not just another academic analysis. Rather, he was describing a seismic paradigm shift that would profoundly disrupt the economics of legal education and cast into doubt nearly a century of academic conventions. Suffice to say, the events of the last three years have made us humbler and wiser.

This essay revisits Larry's seminal essay. Its primary goal is to make Larry's original thesis much more tractable and concrete. It consists of three main pillars: (1) the organizational mindset and incentive structures that blinds large law partners to the gravity of their long-term business problems; (2) a specific rather than abstract description of the technologies and entrepreneurs that are gradually eating away at the work that has traditionally belonged to Big Law; and (3) the economics of the coming “Lean Law” era. With these data in hand, we can begin the difficult process of letting go of old ideas and architecting new institutions that better fit the needs of a 21st century economy.

Henderson notes that Ribstein, in writing on the legal profession and lawyer regulation, considered

how the legal ethics rules, through bans on noncompete agreements and nonlawyer investment, were limiting the ability of lawyers to create new forms of organization that would facilitate optimal levels of risk sharing and innovation. Larry advanced these arguments long before the first signs of trouble. When the large corporate law firms were finally showing signs of stress, Larry reviewed the evidence. He concluded that most large law firms were evolving into highly inefficient, sprawling structures that worked to the benefit of individual lawyers and, as a result, were hollowing out the very mechanisms needed to strengthen and grow the organization. He also took stock of broader trends affecting the market for corporate legal services. The clients were wising up. Moreover, they had other options. Playing out the logical next steps, Larry confidently pronounced The Death of Big Law.

As someone who closely follows the legal market and talks regularly with a wide range of lawyers, I can say with confidence that three years after the publication of The Death of Big Law, Larry's thesis is not widely accepted, let alone understood, by most large law firm lawyers. Yet, for reasons entirely rooted in self-interest and survival, it ought to be. By extension, legal education, which over the last several decades became heavily dependent on the fortunes of Big Law, also needs to grapple with Larry's core message of value creation.

The purpose of this essay is to move from the plane of high theory, where Larry Ribstein was a virtuoso, to the ground floor of practical application, where law firm leaders and educators have to assess myriad messy facts and make decisions about what to do next. Big Law is not dead—Larry was trafficking in metaphor—but it has plateaued. It is also losing market share. This creates an environment of uncertainty that is rarely acknowledged by law firm leaders and legal educators. Something new is going to gradually supplant, or at least rival, Big Law; and as a practical matter, none of us really know what it is going to look like. In times of massive structural shift, strategy is little more than an informed guess. Yet, such an approach is more likely to be successful than one that relies on false, outdated assumptions.

Drawing upon Larry Ribstein's insights and some more recent market data, I will attempt to draw a more concrete picture of the state of Big Law, the evolving market for corporate legal services, and how the future might unfold. This Essay is organized in three sections. Section 1 is a summary of Larry's primary critique of Big Law. Section 2 adds color and concrete detail to Larry's prediction by examining recent trend data for both large law firms and non-law firm competitors that Larry predicted would grow at Big Law's expense. Although Big Law remains big, it appears to be losing market power. Further, Larry may have underestimated the dynamism of nonlawyer entrepreneurs operating in the legal industry and overestimated the need for regulatory changes to spur innovation. The breadth and depth of change is very large and gaining momentum. Section 3 outlines the prevailing economic conditions of the post-Big Law period—what I refer to as Lean Law.

1. Ribstein's critique of Big Law

What do large law firms produce that is distinct and apart from the legal work performed by partners, who own the firm, and their lawyer employees? According to Larry Ribstein, the most persuasive explanation was reputational bonding. Lawyers are in a better position than clients to evaluate the skills, integrity, and work ethic of other lawyers. Therefore, highly capable lawyers have a strong incentive to organize themselves into firms, not only to provide more specialized services to clients—which clients surely need—but also to erect a screen to filter out less able or trustworthy lawyers. Over time, the firm earns a reputation for skillful lawyering and excellent client service. That reputation has positive value that enables a firm to charge premium fees.

Yet, the Ribstein critique also points out that the success of the large firm gives rise to opportunistic behavior by individual lawyers. Once the firm's vaunted reputation is in place, partners may be able to make more money by focusing on their own client relationships and giving short shrift to activities that would preserve and grow the firm's reputational capital (e.g., training and mentoring junior lawyers). Firms can mitigate this behavior through careful screening and monitoring of partners. Yet, firm size, geographic dispersion, and lateral turnover make this job more difficult. In addition, the rise of limited liability through LLP and LLC business forms seemingly reduces the downside individual risk of poor monitoring, which means that lawyers become less vigilant in monitoring each other.

Big Law as a business model is dead, according to Ribstein's critique, because the firms’ reputational capital is being steadily eroded away by a confluence of pervasive business practices. These include five factors:

(4)
Lack of Shared Downside Risk. The migration away from general partnerships, where vicarious liability for partner behavior is potentially unlimited, to limited liability entities, such as LLPs and LLCs, which typically cap liability to one's capital account.

(5)
Proliferation of Exit. Increased emphasis on lateral partner hiring to grow the firm, which “complicates a firms’ ability to maintain a strong culture of trust and cooperation.”

Notwithstanding the appearance of massive size, Ribstein argued the above pervasive practices have made Big Law remarkably brittle and unstable. As stated at the beginning of this section, reputational capital is what enables individual lawyers to obtain firm-level profits distinct and apart from the sale of their own time and services. Yet, “the firm's reputation lasts only as long as lawyers gain more from investing in it than they do from building their own clienteles.” When lawyers infer that their partners lack such a commitment, they become inclined to “grab” clients and invest in behavior that creates portable clientele, which creates better options for exit. When partner profits fall short of their expectations, they head for the doors. Because so few firms in the Big Law sector have avoided these pitfalls, reputational capital, Ribstein argued, is being rapidly dissipated. As a result, large law firms have increasingly become “just a collection of individuals sharing expenses and revenues that has little or no value as a distinct entity.”

The core message of the Ribstein critique is that The Death of Big Law is caused by the decline of the traditional reputational capital model. As discussed above, this decline is substantially caused by the inability, or failure, of law firms to preserve an environment and ethos where individual lawyers invest in the long-term fortune of the firm.

But according to Ribstein, the value of reputational capital is also declining because of external factors, such as the rise of in-house legal departments. With improved ability to evaluate cost and value, legal departments can avoid the price premiums of Big Law by expanding their own in-house capacity. In-house lawyers also have a proliferating array of options to address their legal needs, including hiring of non-US global law firms, legal process outsourcers with operations in India and other low-cost countries, non-lawyer companies and consultants, and mechanized legal advice or products delivered through sophisticated software, as predicted by the lawyer, technology consultant, and futurist Richard Susskind in his book, The End of Lawyers?.

For many practicing lawyers, the Ribstein Death of Big Law thesis makes little sense. In the year 2010, when Larry made a presentation of this seminal paper to a large audience that included several managing partners, the general reaction was polite bafflement. Sure, there had been layoffs and deferrals, but Big Law was only two years removed from record revenues and profits. Even a year into the recession, the incomes enjoyed by partners were still extraordinarily high by historical standards. Two managing partners and a litigation chair of a major law firm, who were formal commentators on Larry's The Death of Big Law paper, conceded that the Big Law model was going to change, but all agreed that Ribstein's dire predictions were overstated.

2. Contemporary market data

Three years after the symposium that featured the Ribstein Death of Big Law critique, Big Law does not appear to be dead. In fact, Big Law is bigger. Yet, as discussed in this Section, there is evidence that the legal industry serving large organizational clients is undergoing significant restructuring. Section 2.1 presents an array of data that shows the continued dissipation of reputational capital: large law firms are losing market power; lateral activity is on the rise; and leverage (the ratio of lawyers to equity partners) is up. Section 2.2 describes a series of non-lawyer legal vendors that are taking portions of the legal supply chain that was formerly the exclusive domain of large law firms. They are growing very rapidly. Indeed, they bear the hallmarks of a disruptive innovation.

'Pacific Injustice and Instability: Bank Account Closures of Australian Money Transfer Operators' by Ross P. Buckley and Ken C. Ooi
in (2014) 25 Journal of Banking and Finance Law and Practice 243-256 argues

The remittance industry provides international money transfer services for migrant workers and individuals looking to send relatively small amounts of money overseas. Money transfer operators (‘MTOs’) facilitate these international payments and offer services to a segment of the market that is often unserved by banks. An alarming trend is developing in Australia of banks closing the accounts of MTOs. A potential explanation for this trend is the increased cost of regulation and perceived risk to the banks from facilitating these transactions. However, these account closures create real problems for the remittance industry, Australia and the Asia Pacific region.

This paper is in five parts. Part I highlights the importance of remittances to the Pacific Island Countries (‘PICs’) and Australia. Part II explores the trend of account closures in Australia. Part III considers the regulatory framework in Australia, and argues that the current approach to anti-money laundering (‘AML’) and counter-terrorist financing (‘CTF’) regulation has had the unintended consequence of encouraging banks to create financial exclusion. Part IV investigates the factors that have influenced bank behaviour. Finally, Part V explores the role that financial regulation should play in promoting financial inclusion.

21 April 2015

A manically sour review by Julian Assange in Newsweek of Luke Harding's The Snowden Files: The Inside Story of the World's Most Wanted Man (2014) may have been fun to write but confirms what many of his critics have written about the reviewer.

Assange characterises the book as "a hack job in the purest sense of the term", "Hollywood bait" and "a turkey" from an institution that should "be called out for institutional narcissism".

His spleen may reflect the comment that

Harding was also the co-author of 2011's WikiLeaks: Inside Julian Assange's War on Secrecy, another tour de force of dreary cash-in publishing, which went on to be the basis for Dreamworks' catastrophic box-office failure: 2013's The Fifth Estate.

From there it's on to

Since I've started praising the book, I might as well continue. As hack jobs by Luke Harding go, a lot of work has gone into this one. Mr. Harding has clearly gone to uncharacteristic lengths in rewriting most of his source material, although it remains in large part unattributed.

Notoriously, as the Moscow bureau chief for The Guardian, Harding used to ply his trade ripping off work by other Moscow-based journalists before his plagiarism was pointed out by The eXile's Mark Ames and Yasha Levine, from whom he had misappropriated entire paragraphs without alteration. For this he was awarded "plagiarist of the year" by Private Eye in 2007.
But—disciplined by experience—he covers his tracks much more effectively here. This book thereby avoids the charge of naked plagiarism.
Yet the conclusion cannot be resisted that this work is painfully derivative. ...

The subtitle of the book, "The Inside Story of the World's Most Wanted Man," is therefore disingenuous. If this is an inside story of Snowden, then anyone can write an inside story of anything.
Something in me has to applaud the chutzpah. There simply isn't a book here. Tangents and trivia serve as desperate page-filler, padding out scarce source material to book length. We are subjected to routine detours through Snowden's historical namesakes, rehearsals of the plot of the James Bond movie Skyfall and lengthy forays into Harding's pedestrian view of Soviet history.
Elsewhere, Harding runs out of external material to recycle and begins to rehash his own, best evidenced in the almost identical Homeric introductions Harding's boss, Alan Rusbridger, receives every time he arrives on the page.
To be fair, not all of the book is secondhand information. The middle chapters, which document The Guardian's internal struggles over the publication of the Snowden information, contain mostly novel anecdotes. True, I'd already heard many of them (The Guardian leaks like a sieve), but it’s convenient to have them all written down in one place.
For most of his narrative, however, Harding is riding on the coattails of other journalists. His is more of a “backside story” than an “inside story.” It reveals a glaring lack of expertise in just about every topic it touches on: the Internet and its subcultures, information and operational security, the digital rights and policy community, hacker culture, the cypherpunk movement, geopolitics, espionage and the security industry.
...

We are left with a "Bullshitter's Guide" to the world of the world's most wanted man. It is a book by someone who wasn't there, doesn't know, doesn't belong and doesn't understand.
Where the book is accurate, it is derivative. And where it is not derivative, it is not accurate. … The result is a story that is a non-story—a generic rendition of the Snowden cycle where lifeless bromide and imagined melodrama stand in for authentic human narrative.

In case you haven't got the message, Assange claims

As you'd expect from a serial plagiarist, the book is a stylistic wasteland. There are no regular impasses in here, only the more refined kind of "impasse we can't get past." Never simply "deny" when you can "categorically deny." Sympathetic characters are always either "wry" or "calm"; that is their entire emotional repertoire.
The words "Orwellian," "Kafkaesque" and "McCarthyite" seem to apply to everything. Far too much is found to be "ironic," all too often "cruelly" so. Cliché after cliché sweeps by in a wash of ugly prose until you are overwhelmed with the cynical functionalism of the thing.
It wouldn't be a Guardian book without some institutional axe-grinding. I made the mistake of glancing at the index before I read the book. There I spotted my name, with the following reference:

Assange, Julian; Manichean world view of..........224.

There is something about seeing my "Manichean world view" casually assigned its own index entry that epitomizes the Guardian's longstanding, cartoon-like vendetta against me.
...

If anyone should answer to the charge of "Manicheanism," it is Harding, who, when he is not slogging through clumsy Hollywood film treatments smearing whistleblowers, can be found busily obsessing over Putin in the pages of The Guardian.
…

Thanks to Russia (and thanks to WikiLeaks), Snowden remains free. Only someone with a "Manichean world view" would be unable to acknowledge this.
The most disappointing thing of all about The Snowden Files is that it is exploitative. It should not have existed at all. We all understand the pressures facing print journalism and the need to diversify revenue in order to cross-subsidize investigative journalism. But investigative journalism involves being able to develop relationships of trust with your sources.
There is a conflict of interest here. Edward Snowden was left in the lurch in Hong Kong by The Guardian, and WikiLeaks had to step in to make sure he was safe. While WikiLeaks worked to find him a safe haven, The Guardian was already plotting to sell the film rights.
How can one reconcile the duty to a source with the mad rush to be the first to market with a lucrative, self-glorifying, unauthorized biography? For all the risks he took, Snowden deserves better than this.

Conclusion, according to Mr Assange?

The Snowden Files is a walloping fraud, written by frauds to be praised by frauds.

'An Overreaction to a Nonexistent Problem: Empirical Analysis of Tort Reform from the 1980s to 2000s' by Scott DeVito and Andrew Jurs in (2015) 3 Stanford Journal of Complex Litigation 62 comments -

Proponents of tort reform have suggested it is a necessary response to rising personal injury litigation and skyrocketing insurance premiums. Yet the research into the issue has mixed results, and the necessity of tort reform has remained unproven.

We decided to research an underdeveloped area by empirically testing the real world effects of noneconomic damages caps. To do so, we assembled a database of nearly fourteen million actual cases filed between 1985 and 2009 and then measured how damages caps affect filing rates for torts. Not only could we analyze the change in filings after adoption of a cap but we could also measure the effect of elimination of a cap as well. When we did, we found something unique in the literature.

We found first that when a state adopts a noneconomic damages cap, there is a statistically significant drop in filings of all torts and for medical malpractice torts. We also found that in both the 1990s and 2000s, the rate of filings dropped consistently as well – in states with tort reform and in states without it. Therefore, our finding of a statistically significant reduction in filings in response to damages caps demonstrates a “doubling-down” effect: there is one drop in filings due to the damages cap, and another drop based on larger background forces.

Next, when assessing the change in filings after elimination of a damages cap, we found something initially counterintuitive but also new to the literature. While one might expect a sharp increase in filings when a cap disappears, our analysis could find no statistically significant change in the filing rate for all torts after elimination, while medical malpractice filings continued to decline overall. We believe that this finding demonstrates and quantifies, for the first time, the non-legal effect of tort reform measures discussed by commentators like Stephen Daniels and Joanne Martin.

We believe the combination of the ‘doubling down’ on plaintiffs as well as the quantifiable non-legal changes in response to damages caps significantly modifies the cost-benefit analysis of tort reform. In Trammel v. United States, the Supreme Court stated: ‘we cannot escape the reality that the law on occasion adheres to doctrinal concepts long after the reasons which gave them birth have disappeared and after experience suggests the need for change’. Based on our empirical assessment, we conclude that tort reform has reached that point and call upon state legislators to reconsider these measures.

Open-plan office layout is commonly assumed to facilitate communication and interaction between co-workers, promoting workplace satisfaction and team-work effectiveness. On the other hand, open-plan layouts are widely acknowledged to be more disruptive due to uncontrollable noise and loss of privacy. Based on the occupant survey database from the Center for the Built Environment (CBE), empirical analyses indicated that occupants assessed Indoor Environmental Quality (IEQ) issues in different ways depending on the spatial configuration (classified by the degree of enclosure) of their workspace. Enclosed private offices clearly outperformed open-plan layouts in most aspects of IEQ, particularly in acoustics, privacy and the proxemics issues. Benefits of enhanced ‘ease of interaction’ were smaller than the penalties of increased noise level and decreased privacy resulting from open-plan office configuration.

The authors note

There exists a large body of literature looking at how the physical environment influences occupants' perception and behaviour in office buildings. As office layout has transitioned in recent decades from conventional private (or cellular) spatial configuration to modern open-plan, the impacts on occupants and organisations have been extensively studied from a variety of perspectives in disciplines as diverse as architecture, engineering, health and psychology.

In addition to tangible economic benefits of open-plan offices such as increased net usable area, higher occupant density and ease of re-configuration (Duffy, 1992 and Hedge, 1982), the open-plan office layout is believed by many to facilitate communication and interaction between co-workers by removing internal walls, which should improve individual work performance and organisational productivity (Brand and Smith, 2005 and Kupritz, 2003). However there is not much empirical evidence to support these widespread beliefs (Kaarlela-Tuomaala et al., 2009 and Smith-Jackson and Klein, 2009). On the contrary, a plethora of research papers identify negative impacts of open-plan office layout on occupants' perception of their office environment. For example, some longitudinal survey results have demonstrated a significant decline in workspace satisfaction (Sundstrom, Herbert, & Brown, 1982), increased distraction and loss of privacy (Kaarlela-Tuomaala et al., 2009), and perceived performance decrement (Brennan, Chugh, & Kline, 2002) after relocation of employees from an enclosed workplace to an open-plan or less-enclosed workplace. Moreover, the occupants in these studies did not adapt or habituate to the change in spatial layout (Brand and Smith, 2005, Brennan et al., 2002 and Virjonen et al., 2007), and many researchers draw a causal link between declining environmental satisfaction and deteriorating job satisfaction and productivity (Sundstrom et al., 1994, Veitch et al., 2007 and Wineman, 1982). Still other research studies attribute escalating Sick Building Syndrome (SBS) symptoms such as distress, irritation, fatigue, headache and concentration difficulties (Klitzman and Stellman, 1989, Pejtersen et al., 2006 and Witterseh et al., 2004) to open-plan office layout.

An extensive research literature consistently identifies noise and lack of privacy as the key sources of dissatisfaction in open-plan office layouts (Danielsson and Bodin, 2009, de Croon et al., 2005 and Hedge, 1982). Firstly, studies based on occupant surveys and laboratory experiments report that noise, in particular irrelevant but audible and intelligible speech from co-workers, disturbs and negatively affects individual performance on tasks requiring cognitive processing (Banbury and Berry, 2005, Haka et al., 2009, Smith-Jackson and Klein, 2009 and Virjonen et al., 2007). The loss of productivity due to noise distraction, estimated by self-rated waste of working time, was doubled in open-plan offices compared to private offices, and tasks requiring complex verbal processing were more likely to be disturbed than relatively simple or routine tasks (Haapakangas, Helenius, Keekinen, & Hongisto, 2008). Also, Evans and Johnson (2000) argue that exposure to uncontrollable noise can be associated with a fall in task motivation. Secondly, with a reduced degree of personal enclosure, open-plan layout often fails to isolate the occupants from unwanted sound (i.e. sound privacy) and unwanted observation (i.e. visual privacy), resulting in an overall feeling of loss of privacy and personal control over their workspace (Brand and Smith, 2005, Brill et al., 1985, Danielsson and Bodin, 2009 and O'Neill and Carayon, 1993). Consequently, occupants experience excessive uncontrolled social contact and interruptions due to close proximity to others and perceived loss of privacy, known as overstimulation, which leads to occupants' overall negative reactions toward their office environment (Maher and von Hippel, 2005 and Oldham, 1988).

Although the absence of interior walls in open-plan office layout purportedly improves communication within teams and, in turn, enhances employee satisfaction, the presumption of improved workplace satisfaction is yet to be verified. Indeed, the disadvantages of open-plan offices dominate previous research outcomes. To date there has been no attempt at quantifying the pros and cons of the open-plan office layout. Hedge (1982) opined that the improved social climate within open-plan offices was insufficient to offset the occupants' negative reactions to this spatial workplace configuration, but attached no empirical evidence to support this argument. Thus the primary objective of this paper is to weigh up the positive impact of the purported advantages of open-plan office layout (i.e. interaction between colleagues) against the negative impact of the disadvantages (i.e. noise and privacy) in relation to occupants' overall satisfaction with their workspace. This study also explores how occupants' attitudes toward the indoor environment change between different office layouts, categorised by the degree of personal enclosure. For example, an occupant located in a spacious private office would have different expectations or priorities for Indoor Environmental Quality (IEQ) compared to an occupant located in a dense, open-plan office.

To summarise, the research questions addressed in this paper are:

(1) Does occupant satisfaction with various IEQ factors change depending on different office layouts?

(2) Does the priority of various IEQ factors (i.e. relative importance for shaping occupants' overall workspace satisfaction) differ between occupant groups in different office layouts?

(3) Do benefits such as ease of interaction between co-workers offset disadvantages such as distraction by noise and loss of privacy in the open-plan office layout?

19 April 2015

'On Evidence: Proving Frye as a Matter of Law, Science, and History' by Jill Lepore in (2014-2015) 124(4) Yale Law Journal 882 offers

a cautionary tale about what the law does to history. It uses a landmark ruling about whether scientific evidence is admissible in court to illustrate how the law renders historical evidence invisible. Frye v. United States established one of the most influential rules of evidence in the history of American law. On the matter of expert testimony, few cases are more cited than Frye. In a 669-word opinion, the D.C. Circuit Court of Appeals established the Frye test, which held sway for seven decades, remains the standard in many states, and continues to influence federal law. “Frye,” like “Miranda,” has the rare distinction of being a case name that has become a verb. To be “Frye’d” is to have your expert’s testimony deemed inadmissible. In Frye, the expert in question was a Harvard-trained lawyer and psychologist named William Moulton Marston. Marston’s name is not mentioned in the court’s opinion, nor does it generally appear in textbook discussions of Frye, in the case law that has followed in its wake, or in the considerable legal scholarship on the subject. Marston is missing from Frye because the law of evidence, case law, the case method, and the conventions of legal scholarship — together, and relentlessly — hide facts. It might be said that to be Marston’d is to have your name stripped from the record. Relying on extensive archival research and on the narrative conventions of biography, this Essay reconstructs Marston’s crucial role in Frye to establish facts that have been left out of the record and to argue that their absence is responsible for the many ways in which Frye has been both narrowly and broadly misunderstood.

Lepore states

The lecture had only just begun when there came a rap at the door. The professor, who wore owl’s-eye spectacles, walked across the room and opened the door. A young man entered. He wore leather gloves. In his right hand, he carried an envelope. Tucked under his left arm were three books: one red, one green, one blue. He said he had a message to deliver; he spoke with a Texas twang. He handed the professor the envelope. While the professor opened the envelope, pulled out a yellow paper, and read its contents, the messenger slid a second envelope into the professor’s pocket. Then, using only his right hand, he drew from another pocket a long, green-handled pocketknife. Deftly, he opened the knife and began scraping his gloved left thumb with the edge of the blade, sharpening it on the leather like a barber stropping a razor.

The class was a graduate course called Legal Psychology, held at American University, in Washington, D.C. It met twice a week, in the evening, in 1922. There were eighteen students, all of them lawyers. They had come to the lecture hall, a building at 1901 F Street, two blocks from the White House, after either a day at the office or a day in court; many of them worked for the federal government. In the course catalog, the professor—a twenty-eight-year-old graduate of Harvard Law School who had earned his Ph.D. in Harvard’s psychology department only the year before—listed a prerequisite: “Students must have a working knowledge of the principles of Common Law to qualify for this course, which is especially designed for practicing attorneys and lawyers having a genuine and active interest in raising the standards of justice in the actual administration of the law.” He was possessed of a certain ambivalent idealism.

The professor finished reading whatever was written on that sheet of yellow paper, said something to the Texan, and sent him on his way. Then, turning to his class, the professor informed his students that the man who had just left the room was not, in fact, a messenger at all; he was, instead, an actor, following a script written by the professor as part of an elaborate experiment. Imagine, the professor likely went on to say, that the man who was here a moment ago has since been arrested and charged with murder. Please write down everything you saw. Eighteen lawyers picked up their pencils.

In preparing the experiment, the professor had identified 147 facts that the students could have observed: the number and color of the books the messenger held, for instance, and the fact that he held them under one arm, his left. After the students had written down everything they’d seen, the professor examined them, one by one; then he cross-examined them. After class, he scored their answers, grading them for completeness, accuracy, and caution (you’d get a point for “caution” if, upon either direct or cross-examination, you said, “I don’t know”). Out of 147 observable facts, the students, on average, noticed only thirty-four. Everyone flunked. And no one, not a single student, noticed the knife.

The professor, William Moulton Marston, had designed this experiment in order to demonstrate to a room full of practicing attorneys that eyewitness testimony is unreliable. The demonstration was not without effect. Days later, two of Marston’s students became involved in a murder trial whose appeal, in Frye v. United States, established one of the most influential rules of evidence in the history of American law. On the matter of expert testimony, few cases are more cited than Frye. The 669-word opinion of the D.C. Circuit Court of Appeals established a new rule of evidence: the Frye test. This rule held sway for seven decades, remains the standard in several states, and continues to influence federal law. “Frye,” like “Miranda,” has the rare distinction of being a case name that has become a verb. To be “Frye’d” is to have your expert’s testimony deemed inadmissible.

Frye was an alleged murderer named James Alphonso Frye. People who cite the case usually know no more about him than his last name. They know even less about the expert called by his defense. That expert was Marston. Marston’s name is not mentioned in the opinions of either the trial or the appellate court. Nor, generally, does his name appear in textbook discussions of Frye, in the case law that has followed in its wake, or in the considerable legal scholarship on the subject of expert testimony. Marston is missing from Frye because the law of evidence, case law, the case method, and the conventions of legal scholarship—together, and relentlessly—hide facts. This Essay Marston-izes Frye, finding facts long hidden to cast light not only on this particular case but also on the standards of evidence used by lawyers, scientists, and historians. It uses a landmark ruling about whether scientific evidence is admissible in court to illustrate how the law renders historical evidence invisible.

The law of evidence began in earnest in the early modern era; the history of evidence remains largely unwritten. Before the eighteenth century, the written rules of evidence were few. In 1794, Edmund Burke said that they were “comprised in so small a compass that a parrot he had known might get them by rote in one half-hour and repeat them in five minutes.” But even as Burke was writing, treatises at once examining and codifying exclusionary rules had already begun to proliferate. This sort of work reached a new height at the beginning of the twentieth century with the publication of John Henry Wigmore’s magisterial, four-volume A Treatise on the System of Evidence in Trials at Common Law. Wigmore’s study of the law of evidence remains a towering influence in the “New Evidence Scholarship,” which emerged in the 1980s, following the adoption of the Federal Rules of Evidence in 1975. The law of evidence is vast; the history of evidence is scant. This is to some degree surprising, because in the last decades of the twentieth century, literary scholars, intellectual historians, and historians of the law and of science became fascinated by epistemological questions about the means by which ideas about evidence police the boundaries between disciplines—a fascination that produced invaluable interdisciplinary work on subjects like the history of truth, the rise of empiricism, and the fall of objectivity. But this line of inquiry has a natural limit: scholars who are engaged in a debate about whether facts exist tend not to be especially interested in digging them up. For all the fascination with questions of evidence, very few scholars have investigated the nitty-gritty, stigmata-to-DNA history of the means by which, at different points in time, and across realms of knowledge, some things count as proof and others don’t.

This Essay chronicles a turning point in the history of evidence. During the last decades of the nineteenth century and the first decades of the twentieth, I argue, standards of evidence in law, science, and history underwent transformations that were at once related and, to a considerable degree, at odds: the case method became standard; modern, government-funded scientific research began; and history, as an academic discipline, attempted to ally itself with the emerging social sciences by establishing a historical method. Curiously enough, the queer career of an obscure Harvard-trained lawyer and scientist who wore owl’s-eye spectacles lies, if not at the heart of this shift, deep in its gut, well stuck.

When that messenger with a Texas twang came to Marston’s lecture hall, he did everything he was told. He spoke his lines. He shifted his books. He reached into his pocket. He sharpened a blade. Marston’s law students, watching, observed almost none of this: they missed three out of every four facts. Case law is like that, too, except that it doesn’t only fail to notice details; it conceals them. This Essay, then, is a cautionary tale about what the law does to history: it hides the knives.

Copyright & Liability

Statements in this blog are my own, rather than those of the University of Canberra.

The text and images are protected under Australian and international copyright and trade mark law. The blog does not represent legal advice. It is for informational purposes only; publication does not create an attorney-client relationship and nothing on this blog constitutes a solicitation for business.

The author pleads guilty to charges of irreverence, irony, indignation and honestly-held opinion.