The Fedora architecture is an extensible framework for the storage, management, and dissemination of complex objects and the relationships among them. Fedora accommodates the aggregation of local and distributed content into digital objects and the association of services with objects. This allows an object to have several accessible representations, some of them dynamically produced. The architecture includes a generic RDF-based relationship model that represents relationships among objects and their components. Queries against these relationships are supported by an RDF triple store. The architecture is implemented as a web service, with all aspects of the complex object architecture and related management functions exposed through REST and SOAP interfaces. The implementation is available as open-source software, providing the foundation for a variety of end-user applications for digital libraries, archives, institutional repositories, and learning object systems.
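The relationship model described above can be made concrete with a small sketch. Fedora's real implementation answers such queries through an RDF triple store with a SPARQL-style interface; the following is only an illustrative stand-in, with invented predicate and object names, showing how relationships among objects and their components reduce to triple-pattern matching.

```python
# Illustrative sketch (not Fedora's actual API): a tiny in-memory
# triple store for RDF-style object relationships, queried with a
# simple (subject, predicate, object) pattern. All names are
# hypothetical examples.

def match(triples, s=None, p=None, o=None):
    """Return triples matching a pattern; None is a wildcard,
    as in a basic SPARQL triple pattern."""
    return [(ts, tp, to) for (ts, tp, to) in triples
            if (s is None or ts == s)
            and (p is None or tp == p)
            and (o is None or to == o)]

# Relationships among digital objects and their components
triples = [
    ("obj:1", "rel:isMemberOf", "coll:maps"),
    ("obj:2", "rel:isMemberOf", "coll:maps"),
    ("obj:1", "rel:hasPart",    "ds:thumbnail"),
]

# "Which objects belong to the maps collection?"
members = [s for (s, _, _) in match(triples, p="rel:isMemberOf", o="coll:maps")]
print(members)
```

In a production triple store the same question would be a one-line SPARQL query; the point here is only that relationship queries operate over patterns, not over object internals.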

To really make the economics of large-scale open education work, we need to minimize or even completely eliminate the traditional teacher-student interaction in favor of learner interaction with digital content and software. This suggests to me that for the foreseeable future, while open education offerings may substantially increase in quality, they will continue to lag behind the best educational offerings financed under more traditional models. And we should also recognize that the Internet is greatly broadening access to both informational resources and educational opportunities that are not open, along with those that are, at least in relatively wealthy nations.

It is important, then, to explore the relationship between information access and access to education in the digital world, consider how digital libraries and other advanced information technologies may bring us closer to the goal of high-quality open education, and understand some of the gaps and shortcomings in that potential.

In the slipstream of the IFLA World Library and Information Congress in Milan, the IFLA Professional Committee organized a special one-day conference on digital libraries. This conference, entitled Digital Library Futures: User Perspective and Institutional Strategies, took place on Tuesday 25th August and was held at the University of Milan. It was co-organized and generously sponsored by the Italian Government through the Athena Project. Because of the rather unusual context in which this event came into being, participation was limited to invited guests only. But IFLA would not have been IFLA if the outcomes of the conference had not been followed up in the general Milan congress programme, to share them with as many interested IFLA colleagues as possible. This took place at the Plenary Session of Wednesday 26th August. This contribution provides a report on the history, the outcomes, and the follow-up of this specific event.

One of the things that surprises me most about reactions to the Google Library Project is that smart people whom I respect seem to think that the only reason that a university library would be involved with Google is because, in some combination, its leadership is stupid, evil, or at best intellectually lazy. To the contrary, although I may be proved wrong, I believe that the University of Michigan (and the other partner libraries) and Google are changing the world for the better. Four years from now, all seven million volumes in the University of Michigan Libraries will have been digitized – the largest such library digitization project in history. Google Book Search and our own MBooks collection already provide full-text access to well over a hundred thousand public domain works, and make it possible to search for keywords and phrases within hundreds of thousands more in-copyright materials. This access is altering the way that we do research. At least as important, the project is its

I am very glad that Paul has entered this conversation. His commitment to revolutionizing the role of the public university in the information ecosystem is inspiring. I am, however, troubled by this claim: "We have a generation of students who will not find valuable scholarly works unless they can find them electronically." Is this true? It's certainly no more true of my students than it was of my peers in the 1980s. Where is the evidence? Sadly, Paul does not actually address the real-world consequences of the Google project:

[This is a reposting of a comment I made in response to Siva Vaidhyanathan's questions about my previous post. I am traveling, and can only produce brief answers to his questions now. Later this week I'll get to most of the issues in more detail here.] Let me start by reminding everyone that I do not speak for Google, nor am I engaged in generalized cheerleading on Google’s behalf. Rather, I am arguing that the University of Michigan Library is doing a Good Thing in its digitization project with Google.

The second installment of the Beta Sprint presentation videos includes Government Publications — Enhanced Access & Discovery through Open Linked Data & Crowdsourcing, Metadata Interoperability Services, and ShelfLife/Library Cloud.

We received nearly 40 submissions to our summer 2011 Beta Sprint, an open call for code and concepts defining how the DPLA should operate. After careful deliberation, the Beta Sprint Review Panel recommended six Beta Sprint submissions to present at our October 2011 plenary meeting in Washington, DC, and an additional three Beta Sprint submissions to present “Lightning Round” talks.

The DPLA Steering Committee established five workstreams for the initial planning phase: Content and Scope; Financial/Business Models; Governance; Legal Issues; and Technical Aspects. Community members suggested a sixth research track, which has since been added to the DPLA effort: Audience and Participation. Please feel free to contribute to any of these research tracks and to propose new ones. The Steering Committee will consider adding new workstreams for the next planning phases based on suggestions here on the wiki.

The Technical Guidelines for Digitizing Cultural Heritage Materials represents shared best practices followed by agencies participating in the Federal Agencies Digitization Guidelines Initiative (FADGI). This document draws substantially on the National Archives and Records Administration’s Technical Guidelines for Digitizing Archival Records for Electronic Access: Creation of Production Master Files – Raster Images (June 2004), but has been revised and updated in several areas to reflect the current recommendations of the working group and to reflect changes that have occurred in the digitization field during the last five years.

In this paper, we describe a design for a digital library collection service. The collection service is an independent mechanism for introducing structure into a distributed information space. Because it is independent of other services and mechanisms in the digital library, the collection service neither constrains other organizational models nor imposes structure where it is not needed or desired.

The motivation for the collection service design lies in traditions well established in the library community, where collection development serves three important roles:

Selection - defining a set of resources that are members of the collection. These may be all the resources in the library (the collection of the Cornell University Library) or a subset of the total resources (the South East Asia Collection of the Cornell University Library). In the traditional library setting, selection usually implies physical containment or demarcation (e.g., a special set of shelves or room in the library where the members of the collection reside).

Specialization - designating a set of resource discovery aids or cataloging techniques, which are tailored to the characteristics of the collection or the audience to which the collection is targeted.

Administration - establishing a set of management and preservation policies that conform to the collection characteristics.

This article explores reasons for these developments and the influence of key players, while speculating on future directions. We find that the term 'digital library' is used in two distinct senses. In general, researchers view digital libraries as content collected on behalf of user communities, while practicing librarians view digital libraries as institutions or services. Tensions exist between these communities over the scope and concept of the term 'library'. Research-oriented definitions serve to build a community of researchers and to focus attention on problems to be addressed; these definitions have expanded considerably in scope throughout the 1990s. Library community definitions are more recent and serve to focus attention on practical challenges to be addressed in the transformation of research libraries and universities.

This essay endeavors—like the “Digitization Matters” forum that inspired it (see Appendix)— to challenge its audience to take a fresh look at approaches to extending access to the special collections in libraries, archives, and museums. The forum speakers were asked to be provocative, not to represent what they or their institutions have done, but to focus on ideas for significantly increasing the scale of our digitization activities. ... The essay, like the forum, focuses on digitization and related processes, but intentionally does not encompass technical specifications for various formats, born-digital materials, or rights issues (each of which warrants a similar essay of its own). It intends to be provocative. Not all of the ideas presented here will apply to a particular situation, but hopefully they will stimulate consideration of appropriate ways to move forward.

...the Digital Library Federation (DLF) undertook a study of its members’ digital library programs. The survey was intended to document how DLF member libraries are focusing their digital library programs: how and under what circumstances their programs were initiated; the influences that shaped their development; the programs’ current organization and funding; and the challenges they anticipate. The primary aim of the study was to help inform libraries in their strategic planning and help them assess their own programs in light of what others have set out to achieve. The study had a number of secondary aims as well; for example, to identify what new roles are emerging for academic libraries; to assess the opportunities and pitfalls that may be associated with these new roles; and to help libraries promote themselves to their faculties and to the university administrators to whom they report.

The Project Management Body of Knowledge (PMBOK®) is an inclusive term that describes the sum of knowledge within the profession of project management. As with other professions such as law, medicine, and accounting, the body of knowledge rests with the practitioners and academics that apply and advance it. The full project management body of knowledge includes knowledge of proven traditional practices that are widely applied, as well as knowledge of innovative and advanced practices that have seen more limited use, and includes both published and unpublished material.

Description of document: This document provides ready reference for the Dublin Core Metadata Element Set, Version 1.1. For more detailed documentation and links to historical versioning information, see the document "DCMI Metadata Terms".
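To make the element set concrete, the sketch below builds a small record using a few of the fifteen DCMES 1.1 elements with Python's standard XML tooling. The element names and the `http://purl.org/dc/elements/1.1/` namespace are from the Dublin Core specification; the record values themselves are invented examples.

```python
# A minimal Dublin Core record built with the standard library.
# Element names and namespace follow DCMES 1.1; values are invented.
import xml.etree.ElementTree as ET

DC = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC)

record = ET.Element("record")
for name, value in [
    ("title",      "Historic Maps of North Carolina"),   # example value
    ("creator",    "Example Digitization Unit"),          # example value
    ("date",       "2002-03-01"),
    ("type",       "Image"),
    ("identifier", "http://example.org/maps/0001"),       # hypothetical URI
]:
    el = ET.SubElement(record, f"{{{DC}}}{name}")
    el.text = value

xml = ET.tostring(record, encoding="unicode")
print(xml)
```

All fifteen elements are optional and repeatable in DCMES, which is why a sketch like this can use any subset; richer description is the domain of the qualified DCMI Metadata Terms mentioned above.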

This issue of Communications highlights some of the many projects underway for the creation or enhancement of digital libraries. At the moment, no one seems to think there will be only one gargantuan digital library to sate the public's appetite for information. Rather, the expectation is that there will be many digital libraries, most of which will have specialized collections and will be internetworked together in a way loosely resembling today's Internet. Most digital library project planners are aware there are intellectual property issues that must be resolved in order to successfully deploy their libraries. Some proposals for digital library projects express an intent to resolve intellectual property issues as part of the overall plan for the library, albeit without much specificity about how this would be achieved in their systems [2, 4].

DSpace is an open source software platform that enables organisations to:

* capture and describe digital material using a submission workflow module, or a variety of programmatic ingest options
* distribute an organisation's digital assets over the web through a search and retrieval system
* preserve digital assets over the long term

This system documentation includes a functional overview of the system, which is a good introduction to the capabilities of the system and should be readable by non-technical folk.

Much progress has been made in aligning library services with changing (and increasingly digital and networked) research and learning environments. At times, however, this progress has been uneven, fragmented, and reactive. As libraries continue to engage with an ever-shifting information landscape, it is apparent that their efforts would be facilitated by a shared view of how library services should be organized and surfaced in these new settings and contexts.

The Metadata Encoding and Transmission Standard (METS) is a data encoding and transmission specification, expressed in XML, that provides the means to convey the metadata necessary for both the management of digital objects within a repository 1 and the exchange of such objects between repositories (or between repositories and their users). This common object format was designed to allow the sharing of efforts to develop information management tools/services and to facilitate the interoperable exchange of digital materials among institutions (including vendors). The METS XML schema was created in 2001 under the sponsorship of the Digital Library Federation (DLF), is supported by the Library of Congress as its maintenance agency, and is governed by the METS Editorial Board. In 2004 it received NISO Registration, which was renewed in 2006.

The Digital Public Library of America (DPLA) will make the cultural and scientific heritage of humanity available, free of charge, to all. By adhering to the fundamental principle of free and universal access to knowledge, it will promote education in the broadest sense of the term. That is, it will function as an online library for students of all ages, from grades K-12 to postdoctoral researchers and anyone seeking self-instruction; it will be a deep resource for community colleges, vocational schools, colleges, universities, and adult education programs; it will supplement the services of public libraries in every corner of the country; and it will satisfy other needs as well—the need for data related to employment, for practical information of all kinds, and for enrichment in the use of leisure.

The objectives of the Duke Digital Collections Program are:

* To create digital collections that are distinctive in terms of their content and/or the means of access they provide to their content;
* To provide digital access to library and archival materials at Duke, especially materials that reflect strengths in the Libraries’ collections and that are useful for teaching, learning, and research at Duke and elsewhere;
* To transform unique teaching and research materials of broad value held by Duke faculty members, departments, and programs into digital collections that are searchable and accessible over the Web;
* To reformat text, image, sound, and moving image collections that are not readily accessible in their current format or would be damaged by use in that format;
* To contribute collaboratively to national and international digital collections initiatives that benefit Duke and the larger research community.

Usability in digital libraries is often focussed on end-user interactions such as searching and browsing. In this chapter we describe usability issues that face the digital librarian in creating and maintaining a digital library. The Greenstone digital library software suite is used as an example to examine how to support digital librarians in their work.

The conception and work of libraries have always been shaped by the external representations of thought manifest in physical media of expression. Whether stone tablets, books, or magnetic recordings, libraries serve to preserve these objects and to provide access to the thought carried in such objects. New media of expression will surely affect the conception and work of libraries. These changes are not supplantations as much as augmentations in the continuing evolution of thought and communication. Theories of information science seek to explain these augmentations and postulate new possibilities for theory and practice. This paper presents a model of digital library evolution that augments service by facilitating community-based sharing of time and information. In the sharium model, people and their interactions are as important as information resources. Digital library content and tools serve as the environment to bring people together for problem solving and intellectual exchange.

The rapid changes in the means of information access occasioned by the emergence of the World Wide Web have spawned an upheaval in the means of describing and managing information resources. Metadata is a primary tool in this work, and an important link in the value chain of knowledge economies. Yet there is much confusion about how metadata should be integrated into information systems. How is it to be created or extended? Who will manage it? How can it be used and exchanged? Whence comes its authority? Can different metadata standards be used together in a given environment? These and related questions motivate this paper.

About Historic Overlay Maps

Selected maps from the North Carolina Maps project can be viewed as Historic Overlay Maps, layered directly on top of current road maps or satellite images. By fading or "seeing through" the historic maps, users are able to compare the similarities and differences between old and new maps, and to study the changes in North Carolina over time.

Using Historic Overlay Maps

The Historic Overlay Maps are presented with a historic map placed on top of a current Google street map. The historic map has been geo-referenced, meaning that it should line up very closely with the current map. Using the Google map interface and check boxes to the left of the map, users can zoom in and out, move the images around, turn the historic map image on and off, or fade the historic map image. Please see the Instructions page for specific directions. Select historic maps are available for download in a format suitable for use in Geographic Information Systems (GIS) software.
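The "geo-referencing" mentioned above is, at its simplest, a transform from pixel coordinates on the scanned map to geographic coordinates. The sketch below shows the idea for the simplified case of a north-up scan with no rotation, so each axis is an independent linear fit between two control points; real GIS tools fit full affine (or higher-order) transforms from many control points. The control-point values are hypothetical.

```python
# Simplified geo-referencing: map pixel (col, row) on a scanned map to
# (lon, lat). Assumes a north-up scan with no rotation, so each axis is
# a linear fit between two control points. Control points are invented.

def make_georef(px0, geo0, px1, geo1):
    """Build a pixel->(lon, lat) function from two control points.
    px = (col, row) in pixels, geo = (lon, lat) in degrees."""
    sx = (geo1[0] - geo0[0]) / (px1[0] - px0[0])  # degrees lon per pixel
    sy = (geo1[1] - geo0[1]) / (px1[1] - px0[1])  # degrees lat per pixel
    def to_geo(col, row):
        return (geo0[0] + sx * (col - px0[0]),
                geo0[1] + sy * (row - px0[1]))
    return to_geo

# Hypothetical control points: top-left and bottom-right of a scan
to_geo = make_georef((0, 0), (-84.5, 36.6), (4000, 3000), (-75.5, 33.8))
print(to_geo(2000, 1500))  # geographic position of the scan's center
```

Once such a transform exists, overlaying the historic image on a modern web map is a matter of projecting its corners and letting the map interface handle opacity and zoom.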

The Duke University Libraries Digital Collections Program builds distinctive digital collections that provide access to Duke's unique library and archival materials for teaching, learning, and research at Duke and worldwide. We contribute collaboratively to national and international digital collections initiatives that benefit Duke and the larger research community.

As an open source, not-for-profit, warm-and-fuzzy, community service oriented project, we don’t normally like to talk about market rivals or competitive products when we talk about Omeka. Nevertheless, we are often asked to compare Omeka with other products. "Who’s Omeka’s competition?" is a fairly frequent question. Like many FAQs, there is an easy answer and a more complicated one.

For democracy to flourish, citizens need free and open access to ideas. In today's digital age, this means access to information and ideas online. In the face of dramatic consolidation in the media industry and new laws that increase its control over intellectual products, the emerging concept of the information commons offers new ways for producing and sharing information, creative works, and democratic discussion. The fifth in FEPP's series of detailed policy reports, The Information Commons is the first comprehensive, easy-to-read summary of a new movement that offers exciting alternatives to today's increasing restrictions on access to information, scholarly research, and other resources so necessary for democracy. ...the report gives an overview of the problem of enclosure, explains how theories of the commons have been adapted to the information age, and describes dozens of flourishing information communities...

Professor Eric Faden of Bucknell University created this humorous, yet informative, review of copyright principles delivered through the words of the very folks we can thank for nearly endless copyright terms.

If a digital library project is to be successful, the project needs to be run in a professional manner, using project management techniques. This article points out some of the most important aspects of project management such as understanding the project requirements, the role of planning, accurately determining budget and schedule, controlling the scope of the project, and developing expertise. In order to accomplish this, the project manager needs to be a multifaceted leader as well as technically adept.

Documenting the American South utilizes Google Analytics to gather website usage statistics and to create monthly usage reports. Our reports (which can be linked to) include the statistics defined below.

One useful strategy for defining a new research program for the “post-DL” era may be to step back. Such a program must build upon and integrate the existing work on digital libraries, as well as the enormous secular changes that have taken place in human and societal behaviors and aspirations amid the pervasive deployment and continuing progress of information technology and networked information. Rather than considering how to re-design, recreate, or enhance libraries as digital libraries, we might usefully focus our attention on the human and social purposes and needs that libraries and allied cultural memory institutions have been intended to address, recognizing that they are not the exclusive agencies addressing such purposes and needs, and recognizing further that there are closely related information management purposes and needs, both long-standing and newly emergent, that have not been satisfactorily addressed by cultural memory organizations...

In 1994, we at the University of North Carolina at Chapel Hill Library undertook the experimental digitization of a small number of highly circulating slave narratives. When making that handful of encoded texts public, we had little idea how the project would grow and flourish, or how deeply it would affect our readers and shape our own vision of the digital library. Now, eight years later, we add the thousandth full-text title to the collection that has since taken shape as Documenting the American South (DAS), marking the occasion with a celebration and symposium here at Chapel Hill on March 1, 2002. This booklet has been prepared as background for that event, its contents culled from more than 1,500 email messages sent to the DAS website between January 1999 and June 2001.

As a National Science Foundation (NSF) program, the NSDL is reaching maturity, but the library is already forging a strong link among research projects, which argues compellingly for continued NSF investment, although with new directions.

This project examined the economics of the production, storage, and distribution of information in print, microfiche, and digital format for the Early Canadiana Online Project. The Early Canadiana Online Project digitized over 3,300 titles and over 650,000 images from the Canadian Institute of Historical Microreproductions collection of pre-1900 print materials published in Canada. An economic model was developed of the stakeholders (publishers, libraries, and patrons) and the costs to stakeholders of the three formats (print, microfiche, and digital). A detailed cost analysis was performed to estimate the costs of each of the three formats. The results of this cost analysis can be used as benchmarks for estimating the costs of other digitization projects. The analysis shows that digital access can be cost-efficient so long as enough libraries receive sufficient benefit to be willing to share the costs of digitization and access.

The North Carolina Collection Photographic Archives (NCCPA) has developed a digitization program specifically designed to provide researchers access to NCCPA materials that have been digitized in recent months and years. This new feature is now available with finding aids for selected collections in the NCCPA. Clicking on a collection name will take you to that collection's finding aid (a descriptive guide to the collection's contents). To search for other NCCPA collections or to gain access to other digitized collections in UNC-Chapel Hill's University Library, please use the online catalog.

Due to the changing nature of librarianship resulting from the increasing amount of information available in digital format, educating digital librarians has become an important agenda within library and information science schools. To design and offer appropriate courses and teaching approaches for training competent digital librarians, educators can benefit from feedback provided by current practitioners in order to accurately determine what skills and knowledge are really required for digital librarians to be effective in the digital work place. To that end, we surveyed current digital library professionals in academic libraries in the United States to identify their activities and skills and to detect any gaps in their training. We analyzed input from the survey responses to learn more about the nature of digital library work practices and to identify common and necessary attributes (knowledge and skills) required of "digital librarians." ...

“Extending the Reach of Southern Sources: Proceeding to Large-Scale Digitization of Manuscript Collections” is a project undertaken by the Southern Historical Collection (SHC), funded through a grant from the Andrew W. Mellon Foundation. The grant enabled the SHC staff to study the feasibility of digitizing the SHC's collections and to plan a long-term digitization program. Housed at the Louis Round Wilson Special Collections Library, the SHC encompasses more than 4,600 individual collections comprised of millions of individual documents. Through this project a “decision matrix” was developed for selecting and prioritizing the massive holdings of the SHC for digitization, and the Digital Southern Historical Collection was planned.

Millions of books from the UC Libraries have been digitized, but how? Go behind the scenes to learn about the UC Libraries’ digitization process and see several ways you can use these newly digital books. “The Story of the Digital Book” explains how our books make their way from the shelf to the screen, the possibilities they bring to users, and how they’re preserved for the long term.

This study examined job announcements published between 1990 and 2000 in College & Research Libraries News containing either the word “electronic” or “digital” in the position title. These positions tended to contain similar responsibilities, primarily because both use technology to enhance access to information. However, there were important differences.

The Core Integration task is to ensure that the NSDL is a single coherent library, not simply a set of unrelated activities. In summer 2000, the NSF funded six Core Integration demonstration projects, each lasting a year. One of these grants was to Cornell University and our demonstration is known as Site for Science. It is at http://www.siteforscience.org/ [Site for Science]. In late 2001, the NSF consolidated the Core Integration funding into a single grant for the production release of the NSDL. This grant was made to a collaboration of the University Corporation for Atmospheric Research (UCAR), Columbia University and Cornell University. The technical approach being followed is based heavily on our experience with Site for Science. Therefore this article is both a description of the strategy for interoperability that was developed for Site for Science and an introduction to the architecture being used by the NSDL production team.

For the past two years, the Computer Science Technical Reports project (CS-TR) has been developing an architecture for a digital library with funding from the Department of Defense's Advanced Research Projects Agency (ARPA). This is a general purpose framework for a digital library in which very large numbers of objects, comprising all types of material, are accessible over national computer networks. It is described in a paper by Robert Kahn and Robert Wilensky (cnri.dlib/tn95-01). This architecture has been the subject of a series of useful discussions from which eight general principles have emerged; they are discussed in this introduction. These principles form the key issues in the transition to a true digital library from the network services that we have today. The Kahn/Wilensky paper also contains a comprehensive framework for resolving these issues.

Purpose – The purpose of this study is to examine the extent to which digital library projects incorporated reference services to increase the value of the collections and support the use of information.
Design/methodology/approach – After defining digital library service types, the study surveyed 60 digital collections/projects from the Digital Initiatives Database (DID) and analyzed what types of services have been offered and how they varied.
Findings – Findings showed that digital collections scored high marks in offering services in two areas – search and digital reference; however, the findings also revealed that they have been limited in giving valuable information services in other areas.
Originality/value – The study shows that the current practice of digital initiatives needs to integrate various services not only to help users find information, but also to instruct users to better utilize the library and its other services.

The post-Ajaxian Web 2.0 world of wikis, folksonomies, and mashups makes well-planned information architecture even more essential. How do you present large volumes of information to people who need to find what they're looking for quickly? This classic primer shows information architects, designers, and web site developers how to build large-scale and maintainable web sites that are appealing and easy to navigate.

Advances in information technology have dramatically changed information seeking, and necessitate an examination of traditional conceptions of library collection. This article addresses the task and reveals four major presumptions associated with collections: tangibility, ownership, a user community, and an integrated retrieval mechanism. Some of these presumptions have served only to perpetuate misconceptions of collection. Others seem to have become more relevant in the current information environment. The emergence of nontraditional media, such as the World Wide Web (WWW), poses two specific challenges: to question the necessity of finite collections, and contest the boundaries of a collection. A critical analysis of these issues results in a proposal for an expanded concept of collection that considers the perspectives of both the user and the collection developer, invites rigorous user-centered research, and looks at the collection as an information-seeking context.

The research library is undergoing foundational change. Recent years have seen major advances in the understanding of service needs and systems to support them. However, the research library community has not yet arrived at a shared understanding of how a library and its services are organized in an increasingly networked environment. In response to this need, DLF established the DLF Services Framework Working Group in November 2004. A brief report of their work through May 2005 is available in HTML or PDF format. Lorcan Dempsey, OCLC Online Computer Library Center, prepared a PowerPoint presentation for the DLF Steering Committee.

A good case statement must be one that grabs a prospective donor's attention, and then offers a solid reason for investing in a program. According to Jerold Panas in his book Making the Case, a case statement must have eight essential elements. Panas cautions that these elements will not necessarily show up as separate items and that they may overlap or even be repeated. Nevertheless, they must be present.

Fair use is a copyright principle based on the belief that the public is entitled to freely use portions of copyrighted materials for purposes of commentary and criticism. For example, if you wish to criticize a novelist, you should have the freedom to quote a portion of the novelist's work without asking permission. Absent this freedom, copyright owners could stifle any negative comments about their work.