
I’ve been thinking about sound recordings because I am currently serving on a task force of the National Recorded Sound Preservation Plan that is looking at copyright issues in recorded sound preservation. The experience has confirmed what we all know – namely, that any institution that wants to act in this space is going to have to assume a certain amount of risk.

I was interested, therefore, to read in the Chronicle of Higher Education about the Judaica Sound Archives at Florida Atlantic University. Although little in its collection is in the public domain, the archive has digitized about 45% of its holdings. Much of the material is made available through stand-alone Judaica Sound Archives Research Stations, which are distributed to other universities and research centers and which may include copyrighted sound recordings for which permission has not been secured.

A risk-averse copyright lawyer, after looking at the Archives’ digitization and distribution activities, as well as the web site’s possibly incorrect characterization of pre-1923 recordings as being in the public domain, would shut the whole thing down. (Pre-1923 is a doubtful test for sound recordings: recordings fixed in the U.S. before 1972 are protected by state law rather than federal copyright, regardless of age, and the status of foreign recordings can turn on place of publication – metadata I couldn’t find on the site.) What I like about the Judaica Sound Archives is that it is seemingly willing to accept the risk in order to preserve and make accessible an important part of our culture. The absence of legal action against it is evidence that it is taking a reasonable risk, even if the law does not explicitly authorize actions undertaken without the permission of the copyright owner.
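
Out of curiosity, I tried reducing that rule to code. The sketch below is mine, not anything the Archives uses, and it deliberately ignores the place-of-publication wrinkle noted above; it simply captures why a pre-1923 date alone settles nothing for a U.S. sound recording (and it is, of course, not legal advice).

```python
# Toy triage for U.S.-fixed sound recordings, as the law stood in 2009:
# recordings fixed before February 15, 1972 fall under state law (see
# 17 U.S.C. 301(c)) until 2067, so even an 1890s cylinder is not yet
# in the public domain.
from datetime import date

FEDERAL_CUTOFF = date(1972, 2, 15)  # fixation on/after this date -> federal copyright
STATE_PROTECTION_ENDS = 2067        # state-law protection runs until this year

def us_recording_is_public_domain(fixed: date, today: date) -> bool:
    """Rough status check for a sound recording first fixed in the U.S."""
    if fixed < FEDERAL_CUTOFF:
        # Pre-1972: state common-law protection, no matter how old the recording.
        return today.year >= STATE_PROTECTION_ENDS
    # Post-1972 recordings carry federal terms (e.g., life + 70 years),
    # none of which have expired as of this writing.
    return False

print(us_recording_is_public_domain(date(1910, 6, 1), date(2009, 10, 1)))  # False
```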

Libraries, archives, and museums should know when they are taking risks and what the extent of that risk might be. The mere presence of slight risk, however, should not keep them from doing things that are socially desirable.

I am delighted to announce the publication today of Copyright and Cultural Institutions: Guidelines for Digitization for U.S. Libraries, Archives, and Museums by Peter B. Hirtle, Emily Hudson, and Andrew T. Kenyon. Published by Cornell University Library, this 260-page book is described in the press release below. It is available as a free download, but is probably more usable as a print copy, available for $39.95. Tell your employer that you need a copy!

ITHACA, N.Y. (Oct. 29, 2009) – How can cultural heritage institutions legally use the Internet to improve public access to the rich collections they hold?

"Copyright and Cultural Institutions: Guidelines for Digitization for U.S. Libraries, Archives, and Museums,” a new book by published today by Cornell University Library, can help professionals at these institutions answer that question.

Based on a well-received Australian manual written by Emily Hudson and Andrew T. Kenyon of the University of Melbourne, the book has been developed by Cornell University Library’s senior policy advisor Peter B. Hirtle, along with Hudson and Kenyon, to conform to American law and practice.

The development of new digital technologies has led to fundamental changes in the ways that cultural institutions fulfill their public missions of access, preservation, research, and education. Many institutions are developing publicly accessible Web sites that allow users to visit online exhibitions, search collection databases, access images of collection items, and in some cases create their own digital content. Digitization, however, also raises the possibility of copyright infringement. It is imperative that staff in libraries, archives, and museums understand fundamental copyright principles and how institutional procedures can be affected by the law.

“Copyright and Cultural Institutions” was written to assist understanding and compliance with copyright law. It addresses the basics of copyright law and the exclusive rights of the copyright owner, the major exemptions used by cultural heritage institutions, and stresses the importance of “risk assessment” when conducting any digitization project. Case studies on digitizing oral histories and student work are also included.

Hirtle is the former director of the Cornell Institute for Digital Collections, and the book evolved from his recognition of the need for such a guide when he led museum and library digitization projects. After reading Hudson and Kenyon’s Australian guidelines, he realized that an American edition would be invaluable to anyone contemplating a digitization project.

Anne R. Kenney, the Carl A. Kroch University Librarian at Cornell University, noted: “The Library has a long tradition of making available to other professionals the products of its research and expertise. I am delighted that this new volume can join the ranks with award-winning library publications on digitization and preservation.”

As an experiment in open-access publishing, the Library has made the work available in two formats. Print copies of the work are available from CreateSpace, an Amazon subsidiary. In addition, the entire text is available as a free download through eCommons, Cornell University’s institutional repository, and from SSRN.com, which already distributes the Australian guidelines.

Cornell University is an Ivy League institution and New York's land-grant university. Among the top ten academic research libraries in the country, Cornell University Library reflects the university's distinctive mix of eminent scholarship and democratic ideals. The Library offers cutting-edge programs and facilities, a full spectrum of services, extensive collections that represent the depth and breadth of the university, and a deep network of digital resources. Its impact reaches beyond campus boundaries with initiatives that extend the land grant mission to a global focus. To learn more, visit library.cornell.edu.

One of the real surprises at the recent “D is for Digitize” conference was the presentation by Daniel Reetz of his DIY Bookscanner project. I don’t spend as much of my time tracking scanning developments as I used to, but his project was all new to me. His presentation, which begins at about 46 minutes into the video, is well worth watching if you are at all interested in scanning technologies (or want to learn how to give an entertaining talk at a conference).

Daniel was kind enough to comment on my brief report of the conference, and I responded briefly to his remarks there. His project, however, is worth more description, even if it is a little off-topic for this blog.

After first constructing a book scanner from scrap, Daniel has since created a portable, machined, sophisticated book scanner that is still a do-it-yourself project. It uses digital cameras and a book cradle to allow one to generate images that software can combine into a PDF of a book. The specifications and instructions are freely available.
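
The core software step is simple enough to sketch in a few lines. Here is a minimal illustration, assuming the pages have already been photographed, cropped, and numbered in order; this is my own toy example using the Pillow imaging library, not the DIY Bookscanner project’s actual post-processing code (which also handles dewarping and cleanup), and the paths are hypothetical.

```python
# Combine a folder of sequentially numbered page photos into one PDF.
from pathlib import Path
from PIL import Image  # pip install Pillow

def pages_to_pdf(image_dir: str, output_pdf: str) -> None:
    """Merge sorted *.jpg page images in image_dir into a single PDF."""
    paths = sorted(Path(image_dir).glob("*.jpg"))
    if not paths:
        raise FileNotFoundError(f"no page images found in {image_dir}")
    # Convert to RGB because PDF output does not support alpha channels.
    pages = [Image.open(p).convert("RGB") for p in paths]
    pages[0].save(output_pdf, save_all=True, append_images=pages[1:])

pages_to_pdf("scans", "book.pdf")
```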

Daniel’s set-up is not going to be a replacement for the high-end robotic scanners from Kirtas, Treventus, Qidenus, and 4DigitalBooks (which are compared in a useful report by Julian Ball), and certainly not for the 1,000 pages per minute(!) Japanese scanner discussed by Jill Hurst-Wahl. It is probably not a replacement for high-end manual face-up scanners such as the one Digital Fusion used to scan Jung’s Red Book. (And if you want to see an over-the-top video of scanning, one that makes the activities that take place in hundreds of libraries every day sound as if they are of earth-shattering importance, watch this.)

One does wonder, though, whether DIY scanning might be an affordable alternative for libraries that can’t afford things like the I2S Copibook or Zeutschel’s or Atiz’s book scanners (including their consumer model). It could also be an alternative for those who have aging Minolta PS3000s and PS7000s.

Let’s hope that someone conducts a formal evaluation of DIY bookscanning and its possible applications in libraries. As digital cameras decrease in price and increase in performance, I am willing to believe that a camera-based scanning system could come close to competing with professional products.
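
Some back-of-the-envelope arithmetic (mine, not from any formal study) suggests why: the resolution a camera delivers depends only on its pixel count and the size of the page it has to cover.

```python
# Rough effective resolution of a camera-based page capture.
# Assumes the page fills the frame and shares the sensor's aspect ratio.
def effective_dpi(megapixels: float, page_w_in: float, page_h_in: float) -> float:
    total_px = megapixels * 1_000_000
    px_tall = (total_px * page_h_in / page_w_in) ** 0.5  # pixels along page height
    return px_tall / page_h_in

# A 10-megapixel camera shooting a 6" x 9" book page:
print(round(effective_dpi(10, 6, 9)))  # ~430 dpi, above the common 300 dpi target
```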

Libraries take different approaches to handling the viewing of pornography on library computers, responding to their own circumstances and communities. The U.S. Supreme Court has upheld federal law that allows libraries to try to filter out child pornography, obscenity, and, for material viewed by children, material deemed "harmful to minors."

When it comes to child pornography, however, libraries should report both the patrons viewing it and the websites found on library computers.

It gives background on the current law, which has been amended several times, including by the PROTECT Act, which prohibits any “digital image, computer image, or computer-generated image that is, or is indistinguishable from, that of a minor engaging in sexually explicit conduct,” even if no actual minor was used to produce the image.

It is a crime to knowingly receive or distribute child pornography, or to knowingly possess or access with intent to view “any book, magazine, periodical, film, videotape, computer disk, or any other material that contains an image of child pornography..."

If library staff see a patron viewing child pornography, they should report it to the police.

The New York State Board of Regents met this week to discuss making permanent its proposed regulations on deaccessioning from museums and historical societies, and it punted.

As I wrote earlier, the Board had issued draft permanent regulations for public comment. The permanent regulations were to replace a series of emergency regulations that it had issued since December 2008. After the required 30-day comment period, the Board was to vote at its 19 October meeting on making the amendment to existing regulations permanent.

Instead, the Board of Regents adopted another temporary emergency regulation and directed the Education Department to start another 30-day comment period on the proposed permanent amendment. No timetable for future action is provided, but the new emergency regulations, which go into effect on 14 November, last only 60 days. That means that the Board will have to act at its 11-12 January meeting, so we can expect a new proposal by the beginning of December at the latest.

Why the delay in adopting permanent regulations? There is no explanation in the new emergency regulations that were passed, but there are hints. There may have been some confusion over dates: the regulations were published in the NY State Register on 26 August, but the memo to the Board states that they were published on the 29th. If that were true (which it is not), the required 30-day public comment period would not have been met.

I suspect, however, that it is more likely that the Education Department simply didn’t have time to assess the public comments received on the proposed regulations. The “Statement of Facts and Circumstances which Necessitate Emergency Action” indicates that about 30 public comments were received on the December 2008 emergency regulation, but it does not report how many were received this time. Nor are the public comments posted on the Department’s web site. The statement does report, however, that “Further revisions to the proposed rule are anticipated in response to review and recommendation by Department staff,” which suggests that some of the comments may have modified the Department’s thinking.

In the interim, as “Culture Grrl” Lee Rosenbaum noted, the emergency regulation differs from the proposed permanent regulation. It appears that the Board simply passed the same emergency regulation it adopted in July 2009. Rosenbaum finds this problematic, since the emergency regulations give four reasons why a historical society or museum can deaccession an item, whereas the proposed permanent regulations provided ten. In my comments to the Board, I noted two other justifications for deaccessioning – one from the University of Wyoming, and one based on an experience at Cornell University – and that was without really thinking very hard about it.

The issue highlights for me the futility of the entire regulatory process. As soon as you try to limit what can be done, a new, justifiable option will arise. No regulations can have the flexibility of best professional practice. Even the current strictures that govern the use of the proceeds from a deaccessioning sale are under legal and ethical scrutiny. The Board of Regents and Education Department should stop trying to micromanage cultural institutions in the state and instead simply require that the governing boards of those institutions operate according to best professional practice and with the mission of the institution in mind.

As the length of my posts suggests, James Grimmelmann put on a very thought-provoking symposium. The issues in GBS are hard, with no clear right and wrong answers. The discussion at the conference only made the decision more difficult, because there are good arguments on both sides.

A few final thoughts:

The discussion of the Rule 23 class action procedures was a real eye-opener for me. To this non-lawyer, it sounds like the authors’ class has been poorly represented by its counsel. The failure to include academic and foreign authors early in the negotiations has led to a multitude of mistakes. I don’t see any easy way of bringing fair representation to this group now, and fear that the entire settlement will collapse as a result. My only consolation would be that the lawyers would not get their obscene $35 million payout.

Similarly, I was struck by how much of the anger and distrust directed at Google should really be directed at the Authors Guild and the AAP. Does anyone really believe, for example, that it is Google’s desire to charge for access to out-of-print books (when it is giving away public domain books for free)? How much of the final settlement was Google forced to accept in order to make the full text of books available? Maybe it is time that we stopped calling this the “Google Book Settlement” and started calling it the “AG and AAP Book Settlement.”

I was disappointed to hear an oft-repeated suggestion that GBS needs to be scaled back in significant ways - by making it opt-in rather than opt-out, or by excluding foreign works, or by limiting what can be done with books in the settlement. The more restricted and limited the settlement, the less useful the entire database will be. The decrease in utility would come with no meaningful increase in the protection of rights holders, since rights holders who think they are getting paid too little (for work that is generating no revenue now) can always opt out.

Even if we accept the idea that the settlement should go forward (and I do), there is plenty of room for improvement. The concessions that Google has announced in the past few months (privacy guidelines, the extension of commercial availability to outlets outside the US, the addition of CC licenses for material, etc.) are all things that could and should have been addressed during the negotiations, before the release of the settlement. Again, the failure to properly represent the author and publisher classes has been the source of innumerable problems. There is room for other improvements – especially more price controls on the institutional subscription.

Much of the criticism of GBS surrounds the issue of orphan works, but there is much confusion (perhaps some of which is deliberate) over what is an orphan work and how GBS would impact those works. Michelle Woods from the Copyright Office presented an overview of its orphan work study and subsequent legislation. Harry Lewis reminded everyone that books spend most of their life in the public domain where they can be freely used. Furthermore, authors want to be read, so the desire for wide dissemination, rather than profit, is the underlying motive for most books. Implicit in his remarks was a concern that GBS might hamper access to our cultural heritage rather than fostering it.

For me, the highlight of this session was Jule Sigall’s talk. Sigall was the lead author of the Copyright Office’s orphan works report. While the Copyright Office may exist to protect the interests of rights holders, in its orphan works study it broke new ground in trying to foster public access to otherwise unusable works. I take Sigall’s opinions seriously, therefore.

His criticism of GBS was pretty damning. It will, he stated, lead to dystopia. Proponents of the settlement, he argued, believe that it will help copyright move to a more rational system that includes formalities, such as registries that identify rights holders: if authors don’t make themselves known, you can then use their work.

Sigall believes GBS establishes a system that is the opposite, for the following reasons:

Google is the only party under the settlement that gets the benefit of using orphan works. It is the settlement, not the BRR, that gives Google the right to use them. The BRR could license some of the collection, but not all of it.

Similarly, the BRR won’t be able to license the use of orphan works.

The 1976 Copyright Act may actually authorize more uses than the GBS allows. Sigall suggested we compare the Authors Guild’s position criticizing orphan works legislation with its implementation of the settlement.

GBS removes Google’s up-front obligations to locate and secure permission from copyright owners, but then limits what down-stream uses Google can make of the work. (Aside: this would appear to be a different reading of the settlement from that of many of its critics, who worry that it gives Google a free hand to do whatever it wants with unclaimed books.) Orphan works legislation would have required Google to do more work up-front, but then would have given it a much freer hand to use works if a rights owner couldn’t be found.

Sigall is one of those who worries about whether Google will protect privacy.

A is for Antitrust

If I know little about the class action issues discussed in the conference’s first panel, I know even less about the antitrust issues discussed in this panel. Hence I found all four panelists to be equally convincing – even though they came down on opposite sides of the antitrust issue.

Here are a few of the highlights that I noted:

Matthew Schruers did not see competition issues in GBS. First, Google is developing an open system using non-proprietary file formats (EPUB, PDF) and open APIs that is accessible to any browser. Second, the consensus is that the public is better off with the service than without it. (Critics, on the other hand, are in effect saying that “no service is better than this service.”) Lastly, he dismissed the notion that Google’s exclusive license to an unknown quantity of orphan works gives it a substantial advantage over competitors. An exclusive license to use books that no one wants, he suggested, does not give you much of an advantage over others. (Of course, there are times when a researcher wants a particular title or edition and no substitute will work as well.)

Sherwin Siy argued that Google is likely to become a working monopoly because it will be the only entity able to sell orphan works. Precisely because one book cannot be substituted for another, GBS must fail on antitrust grounds.

Einer Elhauge gave a very accessible version of his work on antitrust issues in GBS. He encouraged us to use the “but-for” baseline: does the settlement lower consumer welfare from what it would be without a settlement? If you benefit consumers but could have benefited them more, that is not an antitrust violation. Furthermore, GBS, while removing Google’s barriers to entry into distributing a comprehensive set of digital books, does nothing to raise barriers to others entering the arena. First movers, he suggests, should get an advantage.

The afternoon sessions presented less that was entirely new to me, and my note-taking started to flag, so the notes below have less on the actual presentations and more commentary from me.

K is for Keynote

Pamela Samuelson and Paul Courant gave a very engaging lunch-time presentation. I can’t do a better job of summarizing it than LJ has, so I won’t try.

C is for Culture

The four speakers in the afternoon session turned away from legal issues and instead looked at some of the broader cultural issues associated with GBS. The always-entertaining Paul Duguid, for example, while praising the existence of the Google Books database, worried about the scanning and metadata problems he has uncovered in it. His argument (and Geoffrey Nunberg’s similar rant) has always struck me as a little odd for two reasons. First, Google has managed to replicate in a period of 5 or 6 short years almost all the cataloging errors that it took librarians over a century to accumulate. Anyone want to make a guess as to who can clean up their data faster? Second, I don’t see Google as a library and don’t expect the same level of bibliographic accuracy from it as I do from a research library. If you want good metadata, then make sure that Google competitors (such as the Hathi Trust) develop quickly.

The really odd presentation was Daniel Reetz’s talk on his scrap-material, low-cost do-it-yourself scanner. The talk – and the scanner – excited many in attendance and has been the subject of many blog posts from the conference, including posts from Robin Sloan, Harry Lewis, and Eric Hellman. But it struck me as particularly odd for two reasons. First, a common criticism of Google’s project has been its sub-standard scanning, which is far from preservation quality. There was no discussion at all, however, of the quality of the images produced by this little machine. Second, at a conference that had as a subtext whether Google was being respectful enough of copyright owners and publishers, we had a presentation on an approach that could ignore publishers and authors completely. Weird.

Are there situations in which the DIY scanner might be useful? Sure. I always work on the assumption that something is better than nothing, and if you can’t afford a face-up scanner and want something faster than a camera and tripod, this would work. It might make it possible to digitize your home library. You can also buy a turntable that digitizes all the albums in the basement – but most people find it easier to buy a better copy on iTunes.

P is for Public

The final session was devoted to public interest issues in the settlement. Lateef Mtima, who presented himself as normally a defender of rights holders, reminded everyone of the tremendous social good that GBS could bring to underserved populations. Chris Danielsen presented a moving argument in favor of increasing access to books for the visually impaired.

Cindy Cohn from EFF and John Verdi from EPIC talked about the importance of privacy issues in the settlement (as did Carrie Russell, in a masterful presentation that outlined the mixed feelings that most librarians have about the settlement). In a recent Digital Campus podcast, the hosts suggested that the privacy community is trying to use GBS as a venue to argue its general concerns with privacy on the Internet, and I didn’t hear anything in the session to dissuade me of that view.

The privacy issue in GBS seems particularly odd to me. First of all, I don’t view Google as a library and so don’t expect it to follow library confidentiality statutes. Libraries should only subscribe to the database if Google meets professional standards regarding privacy – but right now those standards are pretty low. For example, the International Coalition of Library Consortia’s “Privacy Guidelines for Electronic Resources Vendors” only requires that a vendor (which is what Google would be) “respect the privacy of the users of its products” and not disclose such information to a third party without permission. Critics are expecting more from Google than from any other library vendor.

Kiran Raj (filling in for Michael Guzman), Cynthia Arato, and Jonathan Band opened the panels portion of the conference with what was for me (as a non-lawyer who has looked primarily at copyright) a tremendously useful introduction to the legal issues surrounding the settlement. Much of the session was an explication of Rule 23, the federal procedures governing class action settlements. Since this was all new to me, I will probably present it in too much depth.

Kiran Raj noted that the class in GBS has not been certified yet, and the issue of whether it can be certified is therefore open. (Judges are also supposed to give heightened scrutiny to settlements involving non-certified classes). He outlined four requirements for certifying a class:

Numerosity - joinder of all class members must be impracticable. There is no question on this point in this case, since the number of potential class members runs into the millions.

Commonality - there must be elements that are common to all members.

Typicality – are the claims typical of the entire class?

Adequacy of representation – has the class counsel adequately represented the interests of all the members of the class?

Other elements that must be taken into account under the heightened scrutiny required when a class is not certified but a settlement is proposed are the fairness of the settlement to the proposed class members and the sufficiency of the notice required by Rule 23(e). Many of the objectors to the settlement cited the poor notice.

Cynthia Arato built on the theme of objections to the settlement, arguing that the fact that the settlement has been withdrawn suggests that the parties recognized the validity of some of the objections.

Here are some of the objections she noted:

Notice. Objectors argue that given the scope of GBS and its effect on property rights going forward, it is important to provide notice to all class members – something GBS has not done. GBS also waives trademark and other rights, a fact not mentioned in the notice that did go out.

The promised translations of the settlement were not done.

The plaintiffs are domestic authors and publishers. They cannot therefore represent foreign class members and owners of orphan works (though Jonathan Band later pointed out that the named plaintiffs include international publishers, such as Bertelsmann). GBS also binds foreign authors who may not yet have had their rights infringed by Google.

No jurisdiction. The initial lawsuit was about scanning and snippets; the settlement is much, much broader, and hence inappropriate.

Jonathan Band then presented a useful summary of DOJ’s objections.

The DOJ brief was the game-changing filing. It echoed Cynthia Arato’s procedural points on Rule 23: notice was not adequate, representation of foreign rights holders may not have been adequate, and there is a tension between the interests of the parties and the owners of orphan works. The DOJ solution: scale the settlement back to the issue in the litigation and fight over snippet display. (Of course, to my mind doing so would destroy much of the game-changing utility of the proposed settlement.)

The other half of DOJ’s brief discussed some of the settlement’s negative impacts on competition, including the potential for price fixing in consumer purchases and the fact that only Google can offer services for unclaimed works. Again, DOJ’s solution was to step back to snippet display.

Band noted that the brief was in tension with itself. The first half calls for more respect for the interests of rights holders; the second half calls for more competition, which might actually hurt rights holders more. He did concede that he may have misread the brief – or that some of DOJ’s objections in it are more serious than others. Both he and Kiran Raj noted how unusual it is for the Antitrust Division to make arguments about Rule 23; almost half of the brief is devoted to it. Band also thought it odd that DOJ is so concerned about foreign rights holders. He has since been told that the DOJ filing was more than just the Antitrust Division’s work and was intended to represent the whole government (possibly including the Copyright Office).

A number of interesting issues came up in the discussion afterwards. The biggest that I could see: assuming one scales back the settlement, how far does one go? Does it go all the way back to snippet view (in which case I would hope that Google drops the whole thing and fights the fair use issue)? Arato noted as well that if the settlement is pared down, there will be less money: Google should pay a lot less, the absurd attorneys’ fees would have to be reduced, and there may not be enough money to fund and operate the Book Rights Registry (BRR).

Other interesting points from the discussion:

Band: He is more worried about representation on the BRR than in the settlement, since the BRR will be an ongoing body. Because libraries buy primarily scholarly books, the GBS collection will be heavily weighted towards scholarly works; academic authors, therefore, should be heavily represented on the BRR. Arato: It depends on whether the court says that the plaintiffs are adequate representatives. If the settlement is narrowed to have less impact on orphan works and foreign rights holders, then the adequacy of the current representatives may go up. New parties are not being added, but maybe they can be added informally.

Raj worried that if DOJ signs off on a revised settlement, all of the other objections may become moot. DOJ needs to be sensitive to this.

Band suggested that one way to scale back the scope of the settlement would be to make consumer purchase opt-in but keep the academic market opt-out. His preference, however, is to maintain opt-out in its entirety.

Band also noted a point that should never be forgotten: if GBS is a mess, it is because our copyright laws are a mess. It is trying to fix a problem created by our current copyright law’s unending term extensions. (I would add as well that we stupidly gave up registration and other formalities. If one had to opt in and periodically renew copyright in order to secure statutory protection, we wouldn’t have any need for GBS.)

I is for Industry

The second session was in theory supposed to consider the commercial impact of GBS. In practice, while there were interesting presentations, none of the speakers was directly involved in the publishing industry.

Michael Cairns opened with a recapitulation of his calculation that rather than “millions” of orphan works, as many of the critics are wont to suggest, there are only 580,388 potential orphan works – and the number is likely to be much lower. The full analysis is available on his blog. Cairns’s analysis only looks at English-language books, and I would agree with him that the number of English-language orphans is much, much lower than hysterical critics indicate. And since Google has said that it will consider commercial availability abroad as well as in the US when deciding whether a book is included in the settlement, the number of foreign orphans is also likely to drop precipitously. The very best thing would be for Google to release data on the nature of what it has scanned according to the terms of the settlement, so we would have accurate information on the orphan works issue instead of having to rely on Books in Print data (as Cairns does) or on WorldCat records (as Lavoie and Dempsey have done).

Andrew Devore presented a crisp summary of the objections to the settlement that his firm filed on behalf of Arlo Guthrie and others. What started as a straightforward copyright suit has become an extremely audacious change in the framework for the exploitation of digital works. He is deeply concerned that the potential good of digitization comes at an enormous cost to authors: GBS shifts future rights to Google and dramatically limits the rights of authors to control future uses. Google is a search and advertising company, and GBS would help cement Google’s monopoly over search. Devore’s clients are also concerned about releasing trademark and publicity rights. He argued that insert works are undervalued in the settlement. And he noted that there is no compensation for, or control over, non-display uses (though Grimmelmann, in his tutorial the day before, questioned whether copyright law gives authors any right to these uses). In general, he worried that GBS will establish a foundation for the digital book industry by granting perpetual rights to Google. I couldn’t help but wonder why, if his clients are so worried about these issues, they don’t just opt out of the settlement (rather than trying to scuttle it for everyone else).

Michael Healy followed with a presentation on the scope and changing nature of the publishing industry. Perhaps his most telling observation: publishing and razors are comparably sized industries.

Victor Perlman objected that photographers and illustrators are excluded from the settlement. This is a complaint that I just don’t understand. I have no doubt that once the text is accessible via the settlement, Google will next try to negotiate rights to the content excluded from it (just as I am sure it will want to negotiate with foreign Reproduction Rights Organizations to allow access to the full text outside of the US). The negotiations among the current, too-narrow group of parties have been incredibly difficult; I don’t see how adding more would have helped.

Dan Clancy from Google was put in the position of having to defend Google. Everything, he argued, is changing as we move to a digital economy and a digital world. GBS has become a Rorschach test for how people feel about the future: how privacy will be protected, whether research uses will be enabled, the scope of fair use, and so on.

He noted that 97% of the market is for in-print books. There is some latent value in out-of-print books, but it isn’t much (which is why libraries move such books to off-site storage). The cost of clearing rights is much greater than any economic value.

Consequently, the settlement is nowhere close to creating a foundation for the digital books industry. You can’t build this on the 3% of books that no one wants.