Friday, December 28, 2012

There is a Google+ Community for RDA Cataloging, a place to share ideas. For example, currently there is a template for a corporate body name.

RDA Cataloging is an online community/group/forum for library and information science students, professionals, and cataloging & metadata librarians. It is a place where people can get together to share ideas, trade tips and tricks, share resources, get the latest news, and learn about Resource Description and Access (RDA), a new cataloging standard to replace AACR2.

Wednesday, December 19, 2012

I think we can agree that Santa would use sound data management practices, including the creation and use of proper metadata, to keep track of his gift giving and logistical data. He would want the rest of us to use good metadata so we can always locate that 30-year-old picture of him, too.

Be like Santa and make sure your data is findable and re-useable: use good metadata!

Another major cataloging institution, the GPO, plans to switch to the RDA standard in April.

The U.S. Government Printing Office (GPO) has created a Resource Description and Access (RDA) implementation team to ensure a smooth transition from AACR2 to RDA. GPO cataloging staff are continuing their training efforts, and are now working on sample record creation, the identification of local practices, and formal PCC (Program for Cooperative Cataloging) review. Full implementation is expected in April 2013.

Friday, December 07, 2012

NISO will hold its next open teleconference in our monthly series this coming Monday, December 10th at 3:00 PM Eastern Standard Time.

The topic for the December call will be NCIP (NISO Circulation Interchange Protocol, also known as Z39.83), which is a North American standard with implementations in the US, Canada, and many other countries around the world. NCIP services facilitate the automation of tasks, the exchange of data, the ability to provide information to library staff, and the empowerment of patrons. Each service consists of a request from an initiating application and a reply from a responding application. It is possible for a single software application to play both the initiating and responding roles, but typically there are at least two applications involved.

The Standard itself consists of two parts. Part 1: Protocol defines a protocol that is limited to the exchange of messages between and among computer-based applications to enable them to perform the functions necessary to lend and borrow items, to provide controlled access to electronic resources, and to facilitate co-operative management of these functions. Part 2: Implementation Profile defines a practical implementation structure for NCIP. Version 2.02 of these documents was published in 2012 and is available via http://www.niso.org/workrooms/ncip. The Standing Committee maintains further informational pages at http://www.ncip.info/
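To make the request/reply pattern concrete, here is a minimal sketch in Python of how an initiating application might construct a LookupUser request. The element names follow the general NCIP version 2 pattern, but they are illustrative assumptions; the normative message structure is defined by the Implementation Profile and its schema.

```python
import xml.etree.ElementTree as ET

# NCIP v2 namespace (assumed here for illustration)
NS = "http://www.niso.org/2008/ncip"

def lookup_user_request(user_id: str) -> bytes:
    """Sketch of an initiating application's LookupUser request message.

    Element names follow the general NCIP v2 pattern; consult the
    Implementation Profile (Part 2) for the normative schema.
    """
    ET.register_namespace("", NS)
    msg = ET.Element(f"{{{NS}}}NCIPMessage")
    lookup = ET.SubElement(msg, f"{{{NS}}}LookupUser")
    uid = ET.SubElement(lookup, f"{{{NS}}}UserId")
    ET.SubElement(uid, f"{{{NS}}}UserIdentifierValue").text = user_id
    return ET.tostring(msg, encoding="utf-8", xml_declaration=True)

request = lookup_user_request("patron-123")
```

The responding application would answer with a corresponding LookupUserResponse message; a self-check system that both sends and answers requests would play both roles.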

Mike Dicus, Product Manager at Ex Libris and chair of the NCIP Standing Committee, will participate on the teleconference to discuss the group's work and answer any questions.

The call is free and anyone is welcome to participate. To join, simply dial 877-375-2160 and enter the code: 17800743#. All calls are held from 3-4 p.m. Eastern time.

Thursday, December 06, 2012

searchRetrieve Version 1.0 is a multi-part specification that defines a generic protocol for the interaction required between a client and server for performing searches.

Part 1, the Abstract Protocol Definition (APD) defines a model and a generic protocol for the interaction between a client and server for performing searches. It facilitates interoperability between different search protocols by providing a common framework and terminology for describing these search protocols. The intention is that all search protocols can be regarded as concrete implementations of this definition.
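The best-known concrete implementation of this model is SRU (Search/Retrieve via URL), where a search is a plain HTTP GET carrying a CQL query. As a rough sketch (the endpoint is hypothetical, and the parameter names are those of the widely deployed SRU 1.2 binding):

```python
from urllib.parse import urlencode

def sru_search_url(base_url: str, cql_query: str,
                   start: int = 1, maximum: int = 10) -> str:
    """Build an SRU searchRetrieve GET request (SRU 1.2 parameter names)."""
    params = {
        "operation": "searchRetrieve",
        "version": "1.2",
        "query": cql_query,          # CQL, e.g. dc.title = "cataloging"
        "startRecord": start,
        "maximumRecords": maximum,
    }
    return base_url + "?" + urlencode(params)

# Hypothetical endpoint, for illustration only
url = sru_search_url("http://example.org/sru", 'dc.title = "cataloging"')
```

The server replies with an XML searchRetrieveResponse containing the hit count and the requested slice of records, which is the interaction the Abstract Protocol Definition models.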

The ALA-LC Romanization tables are developed jointly by the Library of Congress (LC) and the American Library Association (ALA). Romanization schemes enable the cataloging of foreign language materials. Romanized cataloging in turn supports circulation, acquisitions, serials check-in, shelflisting, shelving, and reference, particularly in library catalogs that are unable to display non-roman alphabet information.

The ALCTS Committee on Cataloging: Asian and African Materials (CC:AAM) recently received and reviewed a proposal to revise the Japanese romanization table. The table has subsequently been approved. The revised Japanese romanization table is now available for downloading from the ALA-LC Romanization Tables webpage http://www.loc.gov/catdir/cpso/roman.html.

It has always been one of the main functions of the library catalogue to relate resources to other entities. Main and added entries express relationships between persons or organizations and the resources for which they are responsible, and other devices such as analytical added entries, uniform titles, linking entries, and series headings are all ways of expressing defined sets of relationships among resources themselves.

The relationship designators in RDA should be seen as an evolution of these devices. But where MARC captures a relatively limited set of relationships, largely those applicable to traditional library collections, the RDA relationship designators establish a framework to express a potentially much richer set of relationships. In addition, the linked data environment in which RDA relationship designators are intended to be implemented may eventually offer more powerful ways of handling relationships.

Tuesday, December 04, 2012

I'm going off topic to warn everyone about Allegiance Construction Group of Houston, Texas. This is a new name for them. They used to be AF Construction. Under that name they received an F rating from the Better Business Bureau. Under that name they were also dropped by Service Magic. They changed names and placed ownership of the new company in the name of the wife of the former owner of AF Construction. They may well have used other names before AF Construction. Reynaldo Hernandez (Ray Hernandez) has an office at 10030 Blackhawk Blvd, Houston, TX 77089. If any of this matches someone you are considering hiring, find someone else.

Thursday, November 29, 2012

The Policy and Standards Division of the Library of Congress has received a revision proposal for the Malay (in Jawi-Arabic script) ALA-LC romanization table from LC's Jakarta Office. Based on feedback from current users, a couple of additional changes to the table have been made, and the table has been renamed "Jawi / Pegon romanization table". The table's name change acknowledges that it covers more than simply the Malay language.

Wednesday, November 28, 2012

Monday, November 26, 2012

News from the Network Development and MARC Standards Office, Library of Congress.

A new set of MODS 3.4 XSLT 2.0 stylesheets has been added to our MODS Conversions page http://www.loc.gov/standards/mods/mods-conversions.html for community testing and review. These new XSLT 2.0 stylesheets are based on the mappings made available by the Library of Congress on the same page.

The Library of Congress officially launched its Bibliographic Framework Initiative in May 2011. The Initiative aims to re-envision and, in the long run, implement a new bibliographic environment for libraries that makes "the network" central and makes interconnectedness commonplace. Prompted in no small part by the desire to embrace new cataloging norms, it is essential that the library community redevelop its bibliographic data models as part of this Initiative. Toward that objective, this document presents a high-level model for the library community for evaluation and discussion, but it is also important to consider this document within a much broader context, and one that looks well beyond the library community....

The new, proposed model is simply called BIBFRAME, short for Bibliographic Framework.
The new model is more than a mere replacement for the library community's current model/format, MARC. It is the foundation for the future of bibliographic description that happens on, in, and as part of the web and the networked world we live in. It is designed to integrate with and engage in the wider information community while also serving the very specific needs of its maintenance community - libraries and similar memory organizations.

Anyone who is interested in experimenting with the DPLA—from creating apps that use the library’s metadata to thinking about novel designs to bringing the collection into classrooms—is welcome to attend or participate from afar. The hackfest is not limited to those with programming skills, and we welcome all those with ideas, notions, or the energy to collaborate in envisioning novel uses for the DPLA.

The Center for History and New Media will provide spaces for a group as large as 30 in the main hacking space, with couches, tables, whiteboards, and unlimited coffee. There will also be breakout areas for smaller groups of designers and developers to brainstorm and work. We ask that anyone who would like to attend the hackfest please register in advance via this registration form.

There have been many complaints (e.g. https://bugzilla.wikimedia.org/show_bug.cgi?id=19262) that articles take too long to render. For articles with many citations, the obvious low-hanging fruit is COinS metadata. For example, Muammar Gaddafi takes 28.3 seconds to parse, but with COinS removed, it takes 22.2 seconds.
Nobody ever held a straw poll asking the community "can we please make article parsing 27% slower in order to support a rarely-used metadata feature?" I'm sure that data can be provided in some other way. So I would like to remove it. -- Tim Starling (talk) 06:10, 12 November 2012 (UTC)

Users of Zotero and LibX have raised objections. COinS may be brought back when Wikipedia does some system improvements. If you have any feelings on the matter (COinS are necessary, should be replaced by a better schema, etc.) join the discussion.
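For readers unfamiliar with the format: a COinS citation is an empty HTML span whose title attribute carries a percent-encoded OpenURL ContextObject, which is what tools like Zotero scrape from the page. A minimal generator sketch in Python (the sample metadata is invented):

```python
from urllib.parse import urlencode

def coins_span(metadata: dict) -> str:
    """Render book metadata as a COinS span (OpenURL ContextObject in SPAN)."""
    kev = {
        "ctx_ver": "Z39.88-2004",                    # OpenURL context version
        "rft_val_fmt": "info:ofi/fmt:kev:mtx:book",  # referent is a book
    }
    # OpenURL referent keys are prefixed with "rft."
    kev.update({"rft." + k: v for k, v in metadata.items()})
    # Escape ampersands so the span stays valid HTML
    title = urlencode(kev).replace("&", "&amp;")
    return '<span class="Z3988" title="%s"></span>' % title

span = coins_span({"btitle": "Anglo-American Cataloguing Rules",
                   "date": "2002"})
```

The span itself renders as nothing, which is why the parsing cost cited above is easy to overlook: the reader never sees the metadata the parser is paying for.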

Friday, November 16, 2012

NISO will hold its next open teleconference in our monthly series this coming Monday, November 19th at 3:00 PM Eastern Standard Time.

The topic for the November call will be the ResourceSync initiative, which is a joint NISO and Open Archives Initiative (OAI) project to research, develop, prototype, test, and deploy mechanisms for the large-scale synchronization of web resources. More information on this Working Group can be found at http://www.niso.org/workrooms/resourcesync/.

ResourceSync builds on the OAI-PMH strategies for synchronizing metadata; this project will enhance that specification using modern web technologies, and will allow for the synchronization of the objects themselves, not just their metadata. The Web is highly dynamic, with resources continuously being created, updated, deleted, and moved. Web applications that leverage third party resources face the challenge of keeping in step with this rate of change. Many such applications are not concerned with accurate coverage of a server's resources or consider delays in reflecting changes acceptable. In these cases, alignment with the dynamics of a remote server is commonly achieved by optimizing web crawling and resource discovery mechanisms, for example through scheduling crawls based on change frequency prediction. However, there are significant use cases that require more real-time and accurate synchronization.
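As a reminder of the OAI-PMH baseline that ResourceSync builds on, incremental metadata harvesting is just a matter of issuing selective ListRecords requests against a repository's OAI endpoint. A small sketch in Python (the repository endpoint is hypothetical):

```python
from urllib.parse import urlencode

def list_records_url(base_url, metadata_prefix="oai_dc",
                     from_date=None, resumption_token=None):
    """Build an OAI-PMH ListRecords request for incremental harvesting."""
    if resumption_token:
        # A resumptionToken is exclusive: no other arguments accompany it.
        params = {"verb": "ListRecords", "resumptionToken": resumption_token}
    else:
        params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
        if from_date:
            # Only records created, changed, or deleted since this datestamp
            params["from"] = from_date
    return base_url + "?" + urlencode(params)

url = list_records_url("http://example.org/oai", from_date="2012-11-01")
```

Note that this synchronizes only the metadata records; carrying the same change-notification idea over to the resources themselves is exactly the gap ResourceSync is meant to fill.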

CONA is a structured vocabulary containing authority records for cultural works, including architecture and movable works such as paintings, sculpture, prints, drawings, manuscripts, photographs, textiles, ceramics, furniture, other visual media such as frescoes and architectural sculpture, performance art, archaeological artifacts, and various functional objects that are from the realm of material culture and of the type collected by museums. The focus of CONA is works cataloged in scholarly literature, museum collections, visual resources collections, archives, libraries, and indexing projects with a primary emphasis on art, architecture, or archaeology.

The focus of each CONA record is a work of art or architecture. In the database, each work's record (also called a subject in the database, not to be confused with iconographical depicted subjects of art works) is identified by a unique numeric ID. Linked to each work's record are titles/names, current location, dates, other fields, related works, a parent (that is, a position in the hierarchy), sources for the data, and notes. The coverage of CONA is global, from prehistory through the present. Names or titles may be current, historical, and in various languages.

CONA grows through contributions; if your institution has such works, consider contributing to the database.


The ALCTS Committee on Cataloging: Asian and African Materials (CC:AAM) recently received and reviewed a proposal to revise the Arabic romanization table. The table has subsequently been approved.

Using technology and helping people along the way have been constant throughout my career. It will be interesting to see how new standards such as RDA and FRBR will play out. Ultimately, we are all working to make it better for individuals to more easily search and find what they want and need. People will continue to become more accustomed to finding and retrieving the actual documents. Through all aspects of our interactions, it is becoming the expectation that there will be something tangible at the end of the line with each cast for information. I see this even with my two granddaughters. They are growing up to expect that a telephone call is a visual one. Anything else is unacceptable. This is an exciting time for librarians—ripe with opportunities and challenges to meet these very high expectations.

The Webinar, RDA: Are We There Yet? presented by Emily Dust Nimsakont, November 14, 2012 is now available on the archive site, if you happened to miss it.

It's been a long time coming, but Resource Description and Access (RDA), the new cataloging code, will be implemented by the Library of Congress next year. Are you ready? In this session, Emily Dust Nimsakont provides an update on the latest RDA-related developments and offers tips for RDA implementation. Emily Dust Nimsakont is the Government and Information Services Librarian at the Nebraska Library Commission. She previously held the position of Cataloging Librarian at the NLC. She holds a Master's degree in Library Science from the University of Missouri-Columbia, as well as a Master's degree in Museum Studies from the University of Nebraska-Lincoln.

Monday, November 12, 2012

A nice-looking beta number-building / user-contribution tool in WebDewey was recently made available.

Earlier today, the beta version of the number building / user contribution tool was installed in WebDewey. This new WebDewey feature assists users in building numbers, assigning index terms to the resulting numbers, keeping the numbers as personal or sharing the numbers plus index terms with others at the same institution, and contributing the numbers plus index terms back to the Dewey user community.

You’ll notice a new box, Create built number, associated with the record for any valid schedule or table entry.

Thursday, November 08, 2012

All Library of Congress systems will be taken offline beginning Friday evening. This includes LCCN Permalink, Z39.50 and SRU services, ID.LOC.GOV, all listservs, and, of course, the catalog. All Library systems. Service will be restored by Tuesday.

The Library of Congress has planned extensive electrical work and power maintenance for this coming weekend. As a protective measure, all Library systems will be powered down. The maintenance period is scheduled for completion by Tuesday morning, when it is expected all Library systems will have been restored to normal operation. Though it is anticipated work will not be fully completed until late Monday (or very early Tuesday morning), services will start coming back online many hours before then.

Wednesday, November 07, 2012

Our bibliographic exchange ecosystem is incredibly complex. The contributors to this process are numerous and occasionally have competing interests. Beyond this, the metadata that we need to discover content travels a circuitous route through our information community involving a variety of organizations and providers. Much of this exchange, at least within the library community, is centered around antiquated formats that need to be transformed to interoperate with modern information exchange systems. Designing and achieving this transformation will require a great deal of collaboration and consensus among all the affected stakeholders.

Tuesday, November 06, 2012


The ALCTS Committee on Cataloging: Description and Access (CC:DA) recently received and reviewed a proposal to revise the Belarusian romanization table. The table has subsequently been approved.

OCLC is pleased to announce to our cataloging members that additional functionality has been added to the Expert Community to enable upgrading of Cataloging in Publication (CIP) records by member libraries, even when they are coded “pcc” in the 042 field.

OCLC has previously excluded all records that were coded as being Program for Cooperative Cataloging (PCC BIBCO records) from Expert Community replaces. Library of Congress CIP records (DLC Encoding Level 8 records) were not being coded as “pcc” at the time the Expert Community began, but are currently routinely coded in this manner. Not being able to permanently upgrade master records in WorldCat for LC CIP has long been a source of frustration for catalogers. OCLC has heard this frustration and is responding by adding new functionality to enable upgrading of CIP. Records coded as “pcc” with other encoding levels continue to be excluded from Expert Community replaces.

Beginning today, November 5, 2012, catalogers using full level (or higher) OCLC cataloging authorizations will be able to edit/upgrade all fields in LC CIP records that may be edited in other non-pcc master records with one exception. That exception is that the encoding level coding may not be changed. It will remain “8” until an official CIP upgrade is loaded to WorldCat from LC, from a CIP upgrade partner, or is changed by an institution with National Level Enhance authorization. The entire record may be upgraded as needed, including description and subject cataloging; only the encoding level may not be changed. When upgrading a CIP record, never remove correct and accurate information from a master record simply because your institution does not find it useful. This includes LC or Dewey Decimal classification numbers, LC or other subject headings, or other useful fields such as summaries or table of contents information.

Using a full level authorization, catalogers may lock, edit, and then replace the LC CIP records when using Connexion Browser or Client. When using the Client, catalogers may, if desired, skip the initial "lock" step and simply edit and replace to upgrade LC CIP.

Monday, November 05, 2012

The search engine BASE (Bielefeld Academic Search Engine) is a tool for searching Open Access content with automatic classification. It has been active since 2004 and includes 37.4 million documents from 2,900 mostly academic repositories. Special features include a truncation function, a search history, sorting, a drill-down function, and the incorporation of linguistic tools. Interoperability is ensured through the ability to bind BASE to different interfaces.

They also provide help on "Integration of BASE into local infrastructures" and a "Validate OAI Interface" tool. You can also suggest a repository for them to harvest. Is your institution's OAI-PMH metadata being harvested?

Wednesday, October 31, 2012

“Hi there folks, a simple request from your friendly neighborhood LJ editor here: help us get the word out about nominations for 2013 class of Movers & Shakers. If you have any outstanding peers and colleagues in mind, go ahead and submit early and often here: http://lj.libraryjournal.com/lj-movers-shakers/

Likewise, if you have any opportunity to spread the word about nominations to your colleagues via Twitter, Facebook, email lists, in-person nudges, etc., that would also be a tremendous help to us in getting a stellar batch of Movers set for the March 15 issue. Nominations are due November 7.

I'm not sure how many nominations LJ gets for these awards. I know that when I was on the Texas Library Association's Award Committee we got very few. The odds of winning if a deserving person was nominated were very good.

Another reason to nominate someone is that it is good PR for the library. If you're a manager, having a Mover & Shaker on your staff can generate plenty of good press. Just the fact that you think enough of someone's performance to make the nomination is good for staff morale. Even if they don't win, their (and your library's) exceptional work has been read and reviewed by a committee. That alone is good PR. Maybe it will lead to an invitation to speak or write, and a win next year.

What I'm saying is do take the time to send in nominations. There is little to lose and plenty to gain.

Monday, October 29, 2012

The latest update to Terry Reese's MarcEdit tool includes RDA Helper. "The RDA Helper is an in-development tool that provides the ability to RDA'ize AACR2 records. The function allows users to select from a broad range of RDA field options. RDA Helper includes automatic GMD generation." Terry has a video on the Helper.

The Office of Library Co-ordination of the Ministry of Education, Culture and Sport (MECD) commissioned the company Infor@rea to draw up a study of the current situation and best practices regarding exchange formats for data to be published in the Registry Service for Libraries and Related Organizations (hereinafter SERBER). This study falls within the framework of the design and implementation of a strategic model for SERBER based on the ISO 2146:2010 standard.... The last two sections outline the recommendations for publishing open data of the SERBER project, expressed in two successive stages: Stage 1 (Section 5), in which the data will be published in two structured formats, CSV and XML; and Stage 2 (Section 6), marked by the strategic transition to linked data in the context of the Semantic Web. Section 6 presents the specific steps to be carried out and the immediate benefits that would be obtained by the MECD.

Friday, October 26, 2012

Curating for Quality: Ensuring Data Quality to Enable New Science Final Report: Invitational Workshop Sponsored by the National Science Foundation September 10-­11, 2012 Arlington, VA USA.

The National Science Foundation sponsored a workshop on September 10 and 11, 2012, in Arlington, Virginia on “Curating for Quality: Ensuring Data Quality to Enable New Science.” Individuals from government, academic and industry settings gathered to discuss issues, strategies and priorities for ensuring quality in collections of data. This workshop aimed to define data quality research issues and potential solutions. The workshop objectives were organized into four clusters: (1) data quality criteria and contexts, (2) human and institutional factors, (3) tools for effective and painless curation, and (4) metrics for data quality.

Thursday, October 25, 2012

The OLAC Conference organizers came up with a terrific idea, using Dropbox to store and make available the conference handouts.

Find below a link to the Dropbox folder for conference handouts. So far, we have the handouts for the FRBR workshop, the CONSER serials workshop, Digital Images, RDA (plus exercises), Video, and Sound Recording. All of the speakers have asked me to remind you that these slides and handouts may be updated as the conference approaches.

We'll be uploading more as we receive them!

We encourage you to download these slides and handouts to your mobile devices and laptops.

Your workshop assignments will be available in the conference packets when you register. If you registered for the full conference (as opposed to one day), you will receive your first-choice workshop, if you indicated a single first choice. Most conference goers will receive their top four, and if not, very likely four of the top five. Everyone has access to all the handouts!

Wednesday, October 24, 2012


The ALCTS Committee on Cataloging: Asian and African Materials (CC:AAM) recently received and reviewed a proposal for a new Tod-Oirat-Old Kalmyk romanization table. The table has subsequently been approved.

OCLC Research announces the availability of assignFAST, a new Web service that automates the manual selection of FAST Subjects (the Authorized and Use For headings) based on autosuggest technology.

Subject assignment is a two-phase task. The first phase is intellectual: reviewing the material and selecting the correct heading. The second phase is more mechanical: finding the correct form of the heading, along with any diacritics; cutting and pasting it into the cataloging interface; and potentially correcting formatting and subfield coding. If authority control is available in the interface, some of these tasks may be automated.

assignFAST consolidates the entire second phase of the manual process of subject assignment into a single step based on autosuggest technology. The service can easily be added to an existing browser-based interface, providing both subject selection and authority control in a single step.
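The autosuggest pattern behind a service like this is straightforward: as the cataloger types, the client sends the partial heading to the service and displays candidate authorized headings in return. The endpoint and parameter names in this Python sketch are illustrative assumptions, not the documented assignFAST API:

```python
from urllib.parse import urlencode

def suggest_url(fragment: str, maximum: int = 10) -> str:
    """Build an autosuggest query URL for a partial FAST heading.

    Hypothetical endpoint and parameters, sketching the shape of an
    autosuggest call: send the fragment, get candidate authorized
    headings (with their identifiers) back as JSON.
    """
    params = {
        "query": fragment,   # what the cataloger has typed so far
        "rows": maximum,     # cap on candidate headings returned
        "wt": "json",        # response format
    }
    return "http://example.oclc.org/assignfast/suggest?" + urlencode(params)

url = suggest_url("world war, 1939")
```

Picking a candidate from the returned list then fills in the full authorized form, diacritics and all, which is the mechanical second-phase work the announcement describes.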

Wednesday, October 17, 2012


The ALCTS Committee on Cataloging: Asian and African Materials (CC:AAM) recently received and reviewed proposals for new Kazakh (in Arabic script) and Manchu romanization tables, as well as a revision proposal for the Lepcha romanization table. All three tables were approved.

The Georgia Library Association has announced the November 2012 session of the Carterette Series Webinars, "RDA: Are We There Yet?".

The Carterette Series is a bi-monthly educational webinar series highlighting trends, innovation, and best practices in libraries. The free sessions are open to interested parties from all geographic (and astral) locations. Topics are chosen to be of interest to students and employees from all library types, and each session is approved for one Georgia Continuing Education (CE) contact hour.

Can't make it to the live show? That's okay. The sessions will be recorded and available on the CSW site for later viewing.

It's been a long time coming, but Resource Description and Access (RDA), the new cataloging code, will be implemented by the Library of Congress next year - are you ready? In this session, Emily Dust Nimsakont will provide an update on the latest RDA-related developments and offer tips for RDA implementation. Emily is the Government and Information Services Librarian at the Nebraska Library Commission.

Friday, October 12, 2012

Iconify might be useful to GLAM institutions when they have displays or exhibits. The Scout Report has this to say:

Creative types will love Iconify. The basic premise of this web-based application is that it allows creative professionals to turn their work, drawings, photographs, and sketches into both a streamlined website and a downloadable app. Visitors can upload their images and graphics and tinker with them to get things just as they want them. After that, they can share their portfolios via a wide range of social networking sites. This version is compatible with all operating systems.

Currently it is in beta and open only by invitation. There is no mention of metadata, so it won't replace your current repository, but it just might be a nice supplement for special events.

NISO will hold its next open teleconference in our monthly series this coming Monday, October 15th at 3:00 PM Eastern Daylight Time.

The topic for the October call will be the new work toward a Recommended Practice for Monograph Demand-Driven Acquisitions. More information on this Working Group can be found at http://www.niso.org/workrooms/dda/.

Launched just last month, this work group will develop a flexible model for DDA that works for publishers, vendors, aggregators, and libraries. This model will allow libraries to develop DDA plans that meet differing local collecting and budgetary needs while also allowing consortial participation and cross-aggregator implementation....

The call is free and anyone is welcome to participate. To join, simply dial 877-375-2160 and enter the code: 17800743#. All calls are held from 3-4 p.m. Eastern time.

The Open Teleconferences are a quick way to get an update on the status of a NISO initiative. The calls are informal, and questions and discussion are welcome. Following the featured discussion, there is also an opportunity for the NISO community to bring up any issue or topic of interest. This is an excellent time for you to raise any concerns, project ideas, or suggestions of focus for NISO in the coming year.

If you are unable to join us, this call will be recorded and made freely available on the NISO website following the event—as are all of the Open Teleconferences. For more information or to listen to the previous call discussions, please visit: http://www.niso.org/news/events/2012/telecon/.

Thursday, October 11, 2012

The Book Industry Study Group (BISG) has started to look at a standard for subject codes, Thema.

Book industry representatives from 15 countries announced today the formation of a new, global standard to categorize and classify book content by subject. The project, initially known as Thema, was first announced during the Tools of Change Supply Chain Conference taking place during the Frankfurt International Book Fair....

The new standard is meant, initially, to work alongside existing standards such as BIC, BISAC, CLIL, etc. The long range goal is to move all markets to the global standard, helping to eliminate confusion among both upstream and downstream trading partners. BISG, working through the auspices of its Subjects Code Committee, will have responsibility for the application of the standard in the US.

Thursday, October 04, 2012

The University of Northern British Columbia has developed a Drupal module, Jarrow, to handle the process from when the student begins work to the dissemination of the final thesis.

Collecting and disseminating theses and dissertations electronically is not a new concept. Tools and platforms have emerged to handle various components of the submission and distribution process. However, no existing tool handles the entirety of the process from the moment the student begins work on their thesis to the dissemination of the final thesis. The authors have created such a tool, which they have called Jarrow. After reviewing available open-source software for thesis submission and open-source institutional repository software, this paper discusses why and how Jarrow was created and how it works.

The code listed below has been recently approved. The code will be added to the applicable Value Lists for Codes and Controlled Vocabularies lists. See the specific source code lists for current usage in MARC fields and MODS/MADS elements.

The code should not be used in exchange records until 60 days after the date of this notice to provide implementers time to include newly-defined codes in any validation tables.
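An implementer's validation table might enforce that 60-day window with something like the following sketch; the code and announcement date used here are illustrative, not authoritative.

```python
from datetime import date, timedelta

# Hypothetical table of newly approved codes and their announcement dates.
NEW_CODES = {
    "pn": date(2012, 10, 11),  # illustrative date, not the official notice date
}

def code_usable(code: str, on: date, embargo_days: int = 60) -> bool:
    """A newly defined code may appear in exchange records only once the
    60-day window after its announcement has elapsed."""
    announced = NEW_CODES.get(code)
    if announced is None:
        return True  # not a newly defined code; assume already valid
    return on >= announced + timedelta(days=embargo_days)

print(code_usable("pn", date(2012, 10, 20)))  # still embargoed: False
print(code_usable("pn", date(2013, 1, 1)))    # past the window: True
```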

MARC Authentication Action Code List

The following code has been added to the MARC Authentication Action Code List for usage in appropriate fields and elements.

The source codes listed below have been recently approved. The codes will be added to the applicable Source Codes for Vocabularies, Rules, and Schemes lists. See the specific source code lists for current usage in MARC fields and MODS/MADS elements.

The codes should not be used in exchange records until 60 days after the date of this notice to provide implementers time to include newly-defined codes in any validation tables.

Description Convention Source Codes
The following source code has been added to the Description Convention Source Codes list for usage in appropriate fields and elements.

Addition:

pn

Provider-Neutral E-Resource MARC Record Guidelines (Library of Congress, Program for Cooperative Cataloging)

Name and Title Authority Source Codes
The following source code has been added to the Name and Title Authority Source Codes list for usage in appropriate fields and elements.

Friday, September 28, 2012

20 Authors! Free and open to the public! The goal of the Tweens Read Book Festival is to celebrate and promote reading by connecting tweens with authors. The target audience for the event is students in grades 5-8. For more information, visit www.tweensread.com

Wednesday, September 19, 2012

Update No. 15 (September 2012) is now available on the MARC website (www.loc.gov/marc/). It is integrated into the documentation for each of the Online Full and Concise formats that are maintained on that site -- the Bibliographic format, Authority format, Holdings format, Classification format, and Community Information format. The documentation includes changes made to the MARC 21 formats resulting from proposals which were considered by the ALA ALCTS/LITA/RUSA Machine-Readable Bibliographic Information Committee (MARBI), the Canadian Committee on MARC (CCM) and the BIC Bibliographic Standards Group in June 2012.

The changes are indicated in red. Each format also has an appendix, "Format Changes for Update No. 15 (September 2012)," that lists the changes that comprise the update. The Web version of the formats is the official version and is considered the start for implementation planning for MARC 21. Users are not expected to begin using the new features in the format until 60 days from the date of this announcement: September 19, 2012. For more information about format documentation, see: http://www.loc.gov/marc/status.html

Users who want a print version of changed fields will be able to print them from the Format web pages (CDS will not be selling a print version of the update). Changes to the MARC 21 Formats that resulted from Update No. 15 (September 2012) are displayed in red print. The "Format Changes" appendixes described above also incorporate a print guide following the list of changes, to facilitate easy printing.

If you want to update your print copies, read that last paragraph again. The update is not one document you can print out like all the others listed in the update section of the MARC website.

The Public Knowledge Project (PKP) is very pleased to announce the 1.0 (Beta) release of Open Monograph Press (OMP). OMP is an open source software platform for managing the editorial workflow required to see monographs, edited volumes, and scholarly editions through internal and external review, editing, cataloguing, production, and publication. OMP will operate, as well, as a press website with catalog, distribution, and sales capacities.

It includes support for ONIX metadata. "As with all PKP software, OMP can be downloaded for free and installed on a local webserver or it can be hosted by PKP Publishing Services."

Editing bibliographic data is an important part of library information systems. In this paper we discuss existing approaches to developing user interfaces for editing MARC records. There are two basic approaches: screen forms that support entering bibliographic data without knowledge of the MARC structure, and direct editing of the MARC record as it is shown on the screen. The main result presented in the paper is an Eclipse editor for MARC records that fully supports editing of MARC records. It is written in Java as an Eclipse plug-in, so it is platform-independent. It can be extended for use with any data store. Finally, the paper presents a Rich Client Platform application built from the MARC editor plug-in, which can be used outside of Eclipse. The practical application of the results is the integration of the created Rich Client Platform application into the BISIS library information system.
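The two approaches the paper contrasts can be sketched in a few lines of Python (a toy record structure for illustration, not the BISIS or Eclipse editor code): direct editing addresses MARC tags and subfield codes, while a form-based interface maps friendly labels onto the same structure.

```python
# A toy in-memory MARC record: tag -> list of (indicators, {subfield: value}).
record = {"245": [("10", {"a": "Title :", "b": "subtitle /"})]}

def set_direct(record, tag, code, value):
    """Direct editing: the cataloger addresses tag and subfield codes."""
    record.setdefault(tag, [("  ", {})])
    inds, subs = record[tag][0]
    subs[code] = value

# Form-based editing: a screen form maps friendly labels to MARC tags,
# so no knowledge of the MARC structure is needed.
FORM_MAP = {"Title": ("245", "a"), "Publisher": ("260", "b")}

def set_from_form(record, label, value):
    tag, code = FORM_MAP[label]
    set_direct(record, tag, code, value)

set_from_form(record, "Publisher", "Cambridge University Press,")
set_direct(record, "245", "b", "a new subtitle /")
print(record["260"][0][1]["b"])
```

Both routes end up writing the same underlying record, which is why an editor can offer them side by side.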

Friday, September 14, 2012

Are there any planetary science libraries or libraries in the Houston/Galveston area using EOS? If so, would you like to participate in Cross Library Search with the Lunar and Planetary Institute? Just let me know. Thanks.

The Policy and Standards Division of the Library of Congress has received a revision proposal for the Malay (in Jawi-Arabic script) ALA-LC romanization table from LC's Jakarta Office. The proposal recommends the addition of two characters as well as a new note to reflect changes in current Malay printed materials.

The source codes listed below have been recently approved. The codes will be added to the applicable Source Codes for Vocabularies, Rules, and Schemes lists. See the specific source code lists for current usage in MARC fields and MODS/MADS elements.

The codes should not be used in exchange records until 60 days after the date of this notice to provide implementers time to include newly-defined codes in any validation tables.

Classification Scheme Source Codes
The following source code has been added to the Classification Scheme Source Codes list for usage in appropriate fields and elements.

Addition:

ics

International Classification for Standards (International Organization for Standardization)

Description Convention Source Codes
The following source code has been added to the Description Convention Source Codes list for usage in appropriate fields and elements.

The code listed below has been recently approved. The code will be added to the applicable Value Lists for Codes and Controlled Vocabularies lists. See the specific source code lists for current usage in MARC fields and MODS/MADS elements.

The code should not be used in exchange records until 60 days after the date of this notice to provide implementers time to include newly-defined codes in any validation tables.

MARC Authentication Action Code List

The following code has been added to the MARC Authentication Action Code List for usage in appropriate fields and elements.

Addition:

nznb

New Zealand national bibliography (Wellington: National Library of New Zealand)

Wednesday, September 12, 2012

In this webinar, Jonathan Rochkind, Senior Programmer/Analyst at the Sheridan Libraries at Johns Hopkins University, demonstrated how Umlaut allows you to de-couple your "link resolver" (or "known item service") user-facing UI from your underlying knowledge base products—theoretically making it possible to switch out one vendor's knowledge base product for another with no interruption to your users (or to your local applications using Umlaut's API rather than a specific vendor's proprietary API).
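The decoupling being demonstrated can be sketched with a small adapter pattern; the class and method names below are hypothetical illustrations, not Umlaut's actual API.

```python
# Sketch: the user-facing UI depends only on one interface, and vendor
# knowledge bases plug in behind it (hypothetical names throughout).
class KnowledgeBase:
    def lookup(self, issn: str) -> dict:
        raise NotImplementedError

class VendorA(KnowledgeBase):
    def lookup(self, issn):
        return {"issn": issn, "fulltext_url": "http://a.example/ft"}

class VendorB(KnowledgeBase):
    def lookup(self, issn):
        return {"issn": issn, "fulltext_url": "http://b.example/ft"}

def render_links(kb: KnowledgeBase, issn: str) -> str:
    # The UI never sees which vendor answered.
    return kb.lookup(issn)["fulltext_url"]

print(render_links(VendorA(), "1234-5678"))
# Swapping vendors requires no UI change:
print(render_links(VendorB(), "1234-5678"))
```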

The Open Discovery Initiative (ODI), a working group of the National Information Standards Organization (NISO), has been formed to develop a Recommended Practice related to the index-based discovery services for libraries. ODI aims to investigate and improve the ecosystem surrounding these discovery services, with a goal of broader participation of content providers and increased transparency to libraries.

An important component of our work involves gathering information from the key stakeholders: libraries, content providers, and developers of discovery products.

If you are involved in discovery services we request that you respond to our survey. The survey results will provide essential information to the workgroup as it develops recommended practices related to discovery services. A full report on the findings of this survey will be made available publicly on the NISO website later this year.

We are especially interested in input from:

libraries that have implemented or plan to implement a discovery service and

organizations that potentially contribute content to one or more of these services:

primary publishers,

producers of aggregated databases of citation or full-text content for libraries, and

WorldCat continues to grow! As indicated earlier this year, the OCLC Control Number is anticipated to reach one billion after July 1, 2013. At that point, OCLC will increase the length of the OCLC number to accommodate a variable length number string. If you use and/or store OCLC MARC bibliographic records and the OCLC Control Number, you will notice a change after July 1, 2013. You will need to check the systems at your institution that use OCLC MARC bibliographic records and the OCLC number. You may need to implement changes to ensure those systems will be able to successfully handle the longer OCLC number effective July 1, 2013.
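Local checking might start with a small normalization routine that does not assume a fixed length. A minimal Python sketch, assuming the common (OCoLC)/ocm/ocn/on prefix conventions found in 035 fields:

```python
import re

def normalize_oclc(raw: str) -> str:
    """Strip common OCLC control number prefixes ((OCoLC), ocm, ocn, on)
    and leading zeros, leaving the bare variable-length digit string.
    A sketch only; local systems may store the number in other shapes."""
    s = raw.strip()
    s = re.sub(r"^\(OCoLC\)\s*", "", s, flags=re.IGNORECASE)
    s = re.sub(r"^(ocm|ocn|on)", "", s, flags=re.IGNORECASE)
    return s.lstrip("0") or "0"

for raw in ["(OCoLC)ocm00012345", "ocn123456789", "(OCoLC)1000000001"]:
    print(normalize_oclc(raw))
```

Code that pads or truncates the number to a fixed width is exactly the kind of thing this change will break.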

Tuesday, September 04, 2012

It’s been a long time coming, but Resource Description and Access (RDA), the new cataloging code, will be implemented by the Library of Congress next year – are you ready? In this session, Emily Nimsakont, the NLC’s Cataloging Librarian, will provide an update on the latest RDA-related developments and offer tips for RDA implementation.

Wednesday, August 29, 2012

The source codes listed below have been recently approved. The codes will be added to the applicable Source Codes for Vocabularies, Rules, and Schemes lists. See the specific source code lists for current usage in MARC fields and MODS/MADS elements.

The codes should not be used in exchange records until 60 days after the date of this notice to provide implementers time to include newly-defined codes in any validation tables.

Cartographic Data Source Codes
The following source codes have been added to the Cartographic Data Source Codes list for usage in appropriate fields and elements.

Additions:

cwg

Cambridge world gazetteer: a geographical dictionary (Cambridge ; New York: Cambridge University Press)

Tuesday, August 28, 2012

The videos and presentations from the satellite meeting Bibliography in the digital age arranged by IFLA Bibliography Section and IFLA Cataloguing Section are made available online by our kind host Biblioteka Narodowa, Warsaw, Poland at: http://www.bn.org.pl/ifla-2012/presentations-and-videos/

The presentations are:

Aniko Dudás: Who are the users and what are their expectations?

Tuula Haapamäki and Sinikka Luukkanen: Cataloguing Policy in the National Library of Finland - strategies and practices for the metadata of digital resources

Hanne Hørl Hansen: Online materials in The Danish National Bibliography

Wednesday, August 22, 2012

The National Information Standards Organization (NISO) announces the publication of a new American National Standard, JATS: Journal Article Tag Suite, ANSI/NISO Z39.96-2012. JATS provides a common XML format in which publishers and archives can exchange journal content by preserving the intellectual content of journals independent of the form in which that content was originally delivered. In addition to the element and attribute descriptions, three journal article tag sets (the Archiving and Interchange Tag Set, the Journal Publishing Tag Set, and the Article Authoring Tag Set) are part of the standard. While designed to describe the textual and graphical content of journal articles, it can also be used for some other materials, such as letters, editorials, and book and product reviews....

The JATS standard is available as both an online XML document and a freely downloadable PDF from the NISO website (www.niso.org/workrooms/journalmarkup). Supporting documentation and schemas in DTD, RELAX NG, and W3C Schema formats are available at: jats.nlm.nih.gov/.
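To get a feel for the tag set, here is a minimal Python sketch that reads a trimmed, JATS-style fragment with the standard library; the fragment is illustrative only and is not schema-valid against the full Tag Suite.

```python
import xml.etree.ElementTree as ET

# An illustrative JATS-style fragment (heavily trimmed) showing the kind
# of front matter the standard defines for journal articles.
doc = """<article>
  <front>
    <journal-meta><journal-title-group>
      <journal-title>Example Journal</journal-title>
    </journal-title-group></journal-meta>
    <article-meta>
      <title-group><article-title>A Sample Article</article-title></title-group>
    </article-meta>
  </front>
</article>"""

root = ET.fromstring(doc)
print(root.findtext(".//article-title"))
print(root.findtext(".//journal-title"))
```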

In 2011, the Association for Library Collections and Technical Services Preservation and Reformatting Section charged a task force to develop guidelines for libraries digitizing content, with the objective of producing digital products that will endure. The intent of this document was to build on past works. The authors reviewed previous research, practices at over 50 organizations, and samples of digitized works to recommend minimum specifications for sustainable digitized content. The recommendations are not intended to dictate specific technical specifications at any given institution, but rather to set a floor that should not be dropped below. This draft was the result of the task force's work. It is now up for general comment before it is published in its final version.

The Music Discovery Requirements document addresses the unique needs posed by music materials which must be considered for successful discovery. This document discusses the issues and when possible gives concrete recommendations for discovery interfaces. Three appendixes compile technical details of the specific indexing recommendations in spreadsheets. The document was created under the auspices of Music Library Association's Emerging Technologies and Services Committee and officially approved by the Music Library Association's Board of Directors.

Friday, August 17, 2012

So I am really pleased to announce that you can now download a significant chunk of that data as RDF triples. Especially in experimental form, providing the whole lot as a download would have been a bit of a challenge, even just in disk space and bandwidth terms. So which chunk to choose was a question. We could have chosen a random selection, but decided instead to pick the most popular resources in WorldCat, in terms of holdings, an interesting selection in its own right.

To make the cut, a resource had to be held by more than 250 libraries. It turns out that almost 1.2 million fall into this category, so a sizeable chunk indeed. To get your hands on this data, download the 1 GB gzipped file. It is in RDF N-Triples form, so you can take a look at the raw data in the file itself. Better still, download and install a triplestore [such as 4Store], load up the approximately 80 million triples, and practice some SPARQL on them.
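If you just want a first look before installing a triplestore, a few lines of Python will do. This naive split assumes well-behaved lines and ignores full N-Triples escaping, and the sample triples are made up for illustration, not actual WorldCat data.

```python
from collections import Counter

# Illustrative N-Triples lines (not real WorldCat data).
sample = """\
<http://worldcat.org/oclc/1> <http://schema.org/name> "Example title" .
<http://worldcat.org/oclc/1> <http://schema.org/creator> <http://example.org/p1> .
<http://worldcat.org/oclc/2> <http://schema.org/name> "Another title" .
"""

predicates = Counter()
for line in sample.splitlines():
    parts = line.split(None, 2)  # subject, predicate, remainder
    if len(parts) == 3:
        predicates[parts[1]] += 1

print(predicates["<http://schema.org/name>"])  # 2
```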

Tuesday, August 14, 2012

Last week, we sent out our first action alert using our new advocacy tool, Mobile Commons. This was an exciting first step because it was our first time using mobile technology with our network of strong library advocates.

Mobile Commons allows us to send text message alerts to our mobile list. From there, our advocates can connect directly to their legislators simply by responding to the text. Mobile Commons also enables us to post click-to-call alerts on our webpages. The alert connects advocates, whether they're on our mobile list or not, to their legislator's office simply by entering their phone number on our page and clicking "call."

During the week of July 30, we used a text-to-call alert and a click-to-call alert to voice concerns over the Cybersecurity Act of 2012. Thanks to groups like the Electronic Frontier Foundation and others linking to our page, we were able to generate over 300 calls into the U.S. Senate in support of amendments that protect privacy online. That type of support helped lead to debate on the bill being halted, most likely for the rest of the session.

As we move forward in this legislative year, I highly encourage you to sign up for text alerts. It's as simple as texting "library" to 877877 or signing up online. It's a great way to stay up to date on library issues and to engage in hassle-free advocacy.

Thursday, August 09, 2012

OCLC has a video on YouTube, Linked Data for Libraries. "A short introduction to the concepts and technology behind linked data, how it works, and some benefits it brings to libraries." A nice fifteen minute introduction.

Creative Commons is a nonprofit organization that enables the sharing and use of creativity and knowledge through free legal tools.

Our free, easy-to-use copyright licenses provide a simple, standardized way to give the public permission to share and use your creative work — on conditions of your choice. CC licenses let you easily change your copyright terms from the default of “all rights reserved” to “some rights reserved.”

Creative Commons licenses are not an alternative to copyright. They work alongside copyright and enable you to modify your copyright terms to best suit your needs.

Tuesday, August 07, 2012

The most important improvements of the new release compared to DBpedia 3.7 are:

the new release is based on updated Wikipedia dumps dating from late May / early June 2012.

the DBpedia ontology is enlarged and the number of infobox to ontology mappings has risen.

the DBpedia internationalization has progressed and we now provide localized versions of DBpedia in even more languages.

The English version of the DBpedia knowledge base currently describes 3.77 million things, out of which 2.35 million are classified in a consistent Ontology, including 764,000 persons, 573,000 places (including 387,000 populated places), 333,000 creative works (including 112,000 music albums, 72,000 films and 18,000 video games), 192,000 organizations (including 45,000 companies and 42,000 educational institutions), 202,000 species and 5,500 diseases.

We provide localized versions of DBpedia in 111 languages. All these versions together describe 20.8 million things, out of which 10.5 million overlap (are interlinked) with concepts from the English DBpedia. The full DBpedia data set features labels and abstracts for 10.3 million unique things in 111 different languages; 8.0 million links to images and 24.4 million HTML links to external web pages; 27.2 million data links into external RDF data sets, 55.8 million links to Wikipedia categories, and 8.2 million YAGO categories. The dataset consists of 1.89 billion pieces of information (RDF triples) out of which 400 million were extracted from the English edition of Wikipedia, 1.46 billion were extracted from other language editions, and about 27 million are data links into external RDF data sets.

The California Digital Library (CDL) is pleased to announce the release of version 3.1 of XTF (http://xtf.cdlib.org/), an open source, highly flexible software application that supports the search, browse and display of heterogeneous digital content. XTF provides efficient and practical methods for creating customized end-user interfaces for distinct digital content collections and is used by institutions worldwide.

Major features in the 3.1 release include:

Improved schema handling for EAD finding aids. In addition to EAD 2002 DTD, XTF now provides support for search and display of:

EAD 2002 schema and EAD 2002 RelaxNG finding aids

Output from Archivists' Toolkit and Archon

Better OAI 2.0 conformance

Dynamic site maps to support optimal search engine indexing

See the 3.1 change log (http://xtf.cdlib.org/documentation/changelog/#3.1) for further details.

Fixed Field coding of GPub Government Publication (http://www.oclc.org/bibformats/en/fixedfield/gpub.shtm) has changed significantly. Now it is important to consider the "Status of the governmental entity. Choose a code based on the status of the jurisdiction at the time of publication, e.g., for Texas government publications, use code f for the period 1836-1845 and code s for the period 1845- "

Field 040 Cataloging Source (http://www.oclc.org/bibformats/en/0xx/040.shtm): the definition now includes this statement: "Historically in WorldCat the absence of subfield ‡b has indicated that English is the language of cataloging. OCLC now recommends always coding this element."

Wednesday, August 01, 2012

The Policy and Standards Division has posted a document, Summary of Programmatic Changes to the LC/NACO Authority File, at: http://www.loc.gov/aba/rda/pdf/lcnaf_rdaphase.pdf. This process of programmatic changes signals the initial phase of RDA implementation in the authority file that was agreed upon with the Program for Cooperative Cataloging. The recoding of the LC/NAF will take place in two phases:

Phase One will consist only of adding a 667 note to the name authority record (started on July 30, 2012)

Phase Two will consist of the actual programmatic changes to 1XX headings that are not acceptable under RDA (e.g., changes to Bible headings; spelling out Dept. and months, etc., in subfield $d for personal names). This phase is scheduled to take place before March 31, 2013.

The summary provides guidance to RDA catalogers to help determine what to do when encountering a name authority record with the 667 note added in this Phase One.
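As a rough illustration of what a Phase Two mechanical change might look like (the actual PCC specifications are more involved than this), here is a sketch that spells out Dept. and month abbreviations in a heading string:

```python
# Illustrative only: a toy version of the "spell out Dept. and months"
# change described above, applied to a heading as a plain string.
MONTHS = {"Jan.": "January", "Feb.": "February", "Sept.": "September",
          "Oct.": "October", "Nov.": "November", "Dec.": "December"}

def rda_phase2_heading(heading: str) -> str:
    heading = heading.replace("Dept.", "Department")
    for abbr, full in MONTHS.items():
        heading = heading.replace(abbr, full)
    return heading

print(rda_phase2_heading("United States. Dept. of State"))
print(rda_phase2_heading("Smith, John, $d b. Jan. 2, 1901"))
```

Headings that a simple substitution like this cannot handle safely are exactly the ones flagged with the 667 note in Phase One.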

Tuesday, July 31, 2012

The Middle East Librarians' Association (MELA) has a new website devoted to cataloging.

The MELA Committee on Cataloging (ConC) is pleased to announce the launch of its new website, which incorporates the Arabic Cataloging Manual (ACM). The site provides information and useful resources for catalogers and librarians of Arabic, Persian and other languages of the Middle East, and serves as a forum for communication with colleagues nationally and internationally. With its new site, ConC aims to stimulate an exchange of ideas about current standards, emerging trends and best practices.

Monday, July 30, 2012

Here is an information need I have: finding where movies and TV shows are available. For instance, if I want to watch the old BBC show This Life, where can I find it? Netflix, Hulu+, Amazon, Vudu, or ...? It is even worse if I'm looking to stream a movie. There is Plex, Flixster, Crackle, epix, Acorn, fandor, Movie Vault, Pub-D-Hub, Snag Films, etc. The list seems endless.

I need a place to search for TV shows and movies across all of these services that will let me know where I can find them. Does it exist and I just don't know about it, or does it not yet exist?

31 July, 2012 Update. Gary Price of InfoDocket graciously steered me towards a few tools. The best seems to be Clicker. When I searched for This Life it found nothing, but did give an overview of the show. I then searched for the motion comic Torchwood: Web of Deceit. It found it on iTunes and Vudu but missed the one on Amazon downloads. I couldn't spot a list of the services it indexes, so it is hard to tell just what is missing.

Some other tools that might be useful in some cases are the app i.TV, LocateTV, and The Internet Movie Database (IMDB).

We have completed development of an online set of training modules (available at no charge) for the Dewey Decimal Classification (DDC). The modules are based on DDC 23, and each consists of a slide presentation and a set of exercises. Several of the modules treat general principles governing the operation of the DDC; others treat the structure and use of specific tables and main classes. The presentations and exercises assume the availability of the latest version of the DDC database (i.e., WebDewey), and a professor, trainer, and/or experienced Dewey user to offer explanations and field questions.

The availability of many of the modules has been announced previously. What’s new now is that (1) the set of modules covers all of the DDC schedules and tables (modules for the 500s and 600s are newly provided), and (2) all modules have been updated to match DDC 23.

Monday, July 23, 2012

Who: Texas Library Association’s District 8
What: Fall Conference
Where: LoneStar College – CyFair
When: Saturday, September 29, 2012
Why: Networking, continuing education, career development and a good time with great colleagues.

Please submit a program proposal. Online submissions can be made via this Google Form.

The ALA-LC Romanization tables are developed jointly by the Library of Congress (LC) and the American Library Association (ALA). Romanization schemes enable the cataloging of foreign language materials. Romanized cataloging in turn supports circulation, acquisitions, serials check-in, shelflisting, shelving, and reference, particularly in library catalogs that are unable to display non-roman alphabet information.

The ALCTS Committee on Cataloging: Description and Access (CC:DA) recently reviewed and approved a proposal from LC for a new Cherokee romanization table. It has been subsequently approved by the Cherokee Tri-Council meeting in Cherokee, North Carolina. (Press coverage of the meeting is available online.) This is the first ALA-LC romanization table for a Native American syllabary.

The PCC Acceptable Headings Implementation Task Group (PCCAHITG) has successfully tested the programming code for Phase 1 in preparation for the LC/PCC Phased Implementation of RDA as described in the document entitled The phased conversion of the LC/NACO Authority File to RDA found at the Task Group's page.

Changes to name authority records (NARs) that are not susceptible to a mechanical change under Phase 2 will have a 667 note added to them with the statement:

THIS 1XX FIELD CANNOT BE USED UNDER RDA UNTIL THIS RECORD HAS BEEN REVIEWED AND/OR UPDATED

Some other enhancements will be made, when applicable, if the record is being updated to add the note (e.g., 046 fields are added to records for persons when the information is available).

If a record is a candidate for a mechanical change in Phase 2, no note will be added to the record during Phase 1.

The records will be changed in the LC/NACO master file at the Library of Congress, and the daily distribution to the other NACO nodes will begin no earlier than July 30, 2012, with an initial distribution of no more than 30,000 records per day. Please note that this increase is in addition to the normal daily exchange with the LC/NACO partners, and as a consequence it will result in larger weekly distribution loads for subscribers to the MARC Distribution Service Name Authorities database. The daily updates will continue until approximately 420,000 records have been distributed. Lessons learned regarding distribution of changed records during this phase will inform the PCCAHITG as it plans for the handling of Phase 2.

Friday, July 13, 2012

The University of California Curation Center (UC3) at the California Digital Library (CDL) has announced the formation of the first discussion group for ARKs (Archival Resource Keys).

The group is intended as a public forum for people interested in sharing with and learning from others about how ARKs have been or could be used in identifier applications.

The forum is also intended as a mechanism for the CDL/UC3, in its role as the ARK scheme maintenance agency, to seek community feedback on a number of longer term issues and activities, including

publishing the ARK specification as an Internet RFC,

clarifying local and global resolution options, and

understanding metadata retrieval in a linked data environment.

The number of institutions that have registered interest in assigning ARKs, currently over 100, has grown steadily over ten years. The ARK scheme specification was recently renewed as an Internet-Draft and has been stable since 2008. A few small changes are expected before proposing it as an Internet RFC. We hope you will consider joining in the discussion about these topics and others.
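For the curious, the basic ARK shape is ark:/NAAN/Name with optional qualifiers. A simplified Python sketch of parsing it (the full grammar in the Internet-Draft covers more, such as hyphens being insignificant):

```python
import re

# Simplified view of the ARK structure: ark:/NAAN/Name[qualifiers]
ARK_RE = re.compile(r"ark:/(?P<naan>\d{5,})/(?P<name>[^/?#]+)(?P<rest>.*)")

def parse_ark(ark: str):
    m = ARK_RE.match(ark)
    if not m:
        raise ValueError("not an ARK: %r" % ark)
    return m.group("naan"), m.group("name"), m.group("rest")

# 13030 is CDL's Name Assigning Authority Number; the name is illustrative.
print(parse_ark("ark:/13030/tf5p30086k"))
```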

The source codes listed below have been recently approved. The codes will be added to the applicable Source Codes for Vocabularies, Rules, and Schemes lists. See the specific source code lists for current usage in MARC fields and MODS/MADS elements.

The codes should not be used in exchange records until 60 days after the date of this notice to provide implementers time to include newly-defined codes in any validation tables.

Subject Heading and Term Source Codes

The following source codes have been added to the Subject Heading and Term Source Codes list for usage in appropriate fields and elements.

Wednesday, July 11, 2012

A proposal for a Tod-Oirat-Old Kalmyk romanization table was developed in 1998 by Wayne Richter of Western Washington University and circulated in CSB 83. No further action was taken at that time. The Policy and Standards Division is interested in completing work on this table and it is available for review at Tod-Oirat-Old Kalmyk Romanization [PDF, 159 KB].

Thursday, July 05, 2012

District 8 of the Texas Library Association (TLA) has issued a Call for Proposals for the District Conference. The conference is a little early this year, so please submit by July 31. "Your experience and expertise will make an enriching and diverse conference for all types of librarians and library staff."

In this article we present a prototype of a semantic web-based framework for collecting and sharing user-generated content (reviews, ratings, tags, etc.) across different libraries in order to enrich the presentation of bibliographic records. The user-generated data is remodeled into RDF, utilizing established linked data ontologies. This is done in a semi-automatic manner utilizing the Jena and the D2RQ-toolkits. For the remodeling, a SPARQL-construct statement is tailored for each data source.

In the data source used in our prototype, user-generated content is linked to the relevant books via their ISBN. By remodeling the data according to the FRBR model, and expanding the RDF graph with data returned by WorldCat’s FRBRization web service, we are able to greatly increase the number of entry points to each book. We make the social content available through a RESTful web service with ISBN as a parameter. The web service returns a graph of all user-generated data registered to any edition of the book in question in the RDF/XML format. Libraries using our framework would thus be able to present relevant social content in association with bibliographic records, even if they hold a different version of a book than the one that was originally accessed by users. Finally, we connect our RDF graph to the linked open data cloud through the use of Talis’ openlibrary.org SPARQL endpoint.
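The remodeling step might be sketched roughly like this, emitting N-Triples keyed by ISBN; the namespace and property names below are illustrative stand-ins, not necessarily the ontologies the prototype selected.

```python
# Sketch: user-generated content keyed by ISBN is emitted as RDF triples.
# The rev# namespace is used here illustratively.
REV = "http://purl.org/stuff/rev#"

def review_triples(isbn: str, rating: int, text: str):
    book = "<urn:isbn:%s>" % isbn
    yield '%s <%srating> "%d" .' % (book, REV, rating)
    yield '%s <%stext> "%s" .' % (book, REV, text.replace('"', '\\"'))

for t in review_triples("9780521417334", 4, "A useful gazetteer."):
    print(t)
```

In the prototype this kind of mapping is expressed as a SPARQL CONSTRUCT statement per data source rather than hand-written code, but the input-to-triples shape is the same.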

The GLIMIR project at OCLC clusters and assigns an identifier to WorldCat records representing the same manifestation. These include parallel records in different languages (e.g., a record with English descriptive notes and subject headings and one for the same book with French equivalents). It also clusters records that probably represent the same manifestation, but which could not be safely merged by OCLC’s Duplicate Detection and Resolution (DDR) program for various reasons. As the project progressed, it became clear that it would also be useful to create content-based clusters for groups of manifestations that are generally equivalent from the end user perspective (e.g., the original print text with its microform, ebook and reprint versions, but not new editions). Lessons from the GLIMIR project have improved OCLC’s duplicate detection program through the introduction of new matching techniques. GLIMIR has also had unexpected benefits for OCLC’s FRBR algorithm by providing new methods for identifying outliers thus enabling more records to be included in the correct work cluster.
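The content-clustering idea can be illustrated with a toy sketch (GLIMIR's real matching algorithms are far richer): records whose normalized content key agrees are grouped across formats, while a new edition falls outside the cluster.

```python
from collections import defaultdict

# Toy records: the same text in print, microform, and ebook versions,
# plus a new edition that should NOT join the content cluster.
records = [
    {"id": 1, "title": "Moby Dick", "year": "1851", "format": "print"},
    {"id": 2, "title": "Moby Dick", "year": "1851", "format": "microform"},
    {"id": 3, "title": "Moby Dick", "year": "1851", "format": "ebook"},
    {"id": 4, "title": "Moby Dick (new ed.)", "year": "1920", "format": "print"},
]

clusters = defaultdict(list)
for rec in records:
    key = (rec["title"].lower(), rec["year"])  # naive content key
    clusters[key].append(rec["id"])

print(sorted(clusters[("moby dick", "1851")]))  # the cross-format cluster
```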