Friday, December 28, 2007

Adding COinS and DOIs to our Contribution pages. Don't look for them yet; they're not live. I've also updated the RSS feed for the pages. This is not cataloging, or is it? The metadata fits some of the activities in FRBR. Find, identify, and obtain are all aided by these bits of info. In any event, anything that makes our work easier to find, use, and cite is all good for the Institute. For me it makes a nice change of pace from ISBD/MARC/AACR2.
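For anyone curious about the mechanics: a COinS is just an empty HTML span whose title attribute carries an OpenURL ContextObject, which an OpenURL resolver or a tool like Zotero can pick up. Something along these lines, with a made-up title and DOI:

<span class="Z3988"
      title="ctx_ver=Z39.88-2004&amp;rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&amp;rft.atitle=Example%20Contribution&amp;rft_id=info%3Adoi%2F10.1000%2Fexample"></span>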

Wednesday, December 26, 2007

Recently there has been massive growth in the use of tags as a simple, flexible way to categorize resources. Tags are often used collaboratively to help share information on websites such as del.icio.us. However, the number of tags used in such a service is extremely large, and their unstructured nature limits their value when navigating these websites and prevents users from fully exploiting tags added by others. Clustering similar tags can improve this by adding structure. In this paper we discuss techniques for deriving tag similarity and explain two tag clustering algorithms. We applied the algorithms to two datasets containing tags provided by users with common interests. The first dataset is from a tagging service used by a small group of colleagues and the second is from a public, web-based service. The paper examines the effectiveness of both clustering algorithms and their robustness to the different types of data, giving suggestions of possible ways to improve the algorithms.
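The paper's own similarity measures and clustering algorithms are not reproduced here, but the general idea of deriving tag similarity from co-occurrence is easy to sketch. A toy illustration in Python (cosine similarity over co-occurrence counts, not the authors' method):

from collections import defaultdict
from math import sqrt

# each bookmark is a set of tags assigned by some user
bookmarks = [
    {"python", "programming", "tutorial"},
    {"python", "code", "programming"},
    {"cooking", "recipes"},
    {"recipes", "food", "cooking"},
]

# count how often each pair of tags appears on the same bookmark
cooc = defaultdict(lambda: defaultdict(int))
for tags in bookmarks:
    for a in tags:
        for b in tags:
            if a != b:
                cooc[a][b] += 1

def similarity(a, b):
    """Cosine similarity between the co-occurrence vectors of two tags."""
    common = set(cooc[a]) & set(cooc[b])
    dot = sum(cooc[a][t] * cooc[b][t] for t in common)
    norm_a = sqrt(sum(v * v for v in cooc[a].values()))
    norm_b = sqrt(sum(v * v for v in cooc[b].values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

print(similarity("python", "code"))     # high: the tags travel together
print(similarity("python", "recipes"))  # zero: they never co-occur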

There has been much talk about using metadata from other communities to enrich our catalogs and/or lower the costs of cataloging. Recently there has been quite a flap on AUTOCAT over distributors dumping minimum-level records into OCLC. Now Karen Coyle has looked at Titles in Retail and Publisher Data. Real data.

Thursday, December 20, 2007

In the latest Thinking Out Loud with George and Joan, George Needham revealed a startling finding from the next OCLC report: library use does not correlate with library support. We can't assume that our members will support us in a bond issue, nor can we assume that those not using the library won't support funding. Rallying our members for a tax increase or bond issue is not the best way to get funding; we have to mobilize our supporters, whether members or not. The full report is due in the first part of 2008.

For the reasoning behind the use of "members" instead of "patrons," "users," etc., listen to the podcast.

Wednesday, December 19, 2007

The codes listed below have been recently approved for use in MARC 21 records. The codes will be added to the online MARC Code Lists for Relators, Sources, Description Conventions.

The codes should not be used in exchange records until after February 18, 2008. This 60-day waiting period is required to provide MARC 21 implementers time to include newly defined codes in any validation tables they may apply to the MARC fields where the codes are used.

Classification

The following codes are for use in subfield $2 in field 084 in Bibliographic and Community Information records (Other Classification Number), in subfield $2 in field 084 in Classification records (Classification Scheme and Edition), and in subfield $2 in field 065 in Authority records (Other Classification Number).

Tuesday, December 18, 2007

The 2007 edition of the "MARC Code List for Languages" is now available from the Library of Congress. This new publication contains a list of languages and their associated three-character alphabetic codes that allow for the designation of the language or languages in MARC records. References from variant forms and specific language names assigned to group codes are included.

The list includes all valid codes and code assignments as of September 2007 and supersedes the 2003 edition of the "MARC Code List for Languages." There are 27 code additions and 12 changed code captions in this edition.

I went to a wonderful performance of Messiah this week. It is always one of the best parts of the season. However, it really is an Easter piece. There is a Christmas section, but then it goes on to the death and aftermath. Not very Christmas. What I'd like to see is just the first part and the Hallelujah Chorus (folks would complain if that was missing) and then the Amen Chorus. Then after intermission another work could be presented. Hodie by Vaughan Williams does not get played often enough for my taste. There are plenty of works that could fill a second half. If the program ran a bit short, a nice sing-along could fill the end of the concert.

I hope some music director in the Houston area is reading this and takes the suggestion (Ha!). Has anyone heard Hodie live?

Friday, December 14, 2007

The paper is intended to generate comments useful in making recommendations for the future direction of PCC series practices and policies. Any individuals or organizations interested in series control policies, practices, and services are welcome to comment.

VuFind is a library resource portal designed and developed for libraries by libraries. The goal of VuFind is to enable your users to search and browse through all of your library's resources by replacing the traditional OPAC to include:

Catalog Records

Locally Cached Journals

Digital Library Items

Institutional Repository

Institutional Bibliography

Other Library Collections and Resources

VuFind is completely modular so you can implement just the basic system, or all of the components. And since it's open source, you can modify the modules to best fit your need or you can add new modules to extend your resource offerings.

During spring and early summer of 2007, the School of Library and Information Science at Kent State University conducted a Delphi study on critical FRBR issues as part of an IMLS-funded project concerning the research and development of FRBR-based retrieval systems.

The greatest concern was "Need to develop cataloging rules in line with FRBR." A bit further down the list was "Need to verify and validate the FRBR model against real data and in different communities to make sure the model is valid and applicable."

We’re also starting to use this metadata to power our own applications. The OpenOffice.org Addin ships with a copy of the RDF and uses SPARQL to determine the license you’ve selected. As we continue to build out the tools around CC licenses we’ll be moving in a similar direction, looking for ways we can leverage this resource we already have.

You can build on it, too; everything we do goes into source control. You can find the RDF files in the license.rdf module. A description of the namespace is also available.
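To give a feel for what such a lookup involves, here is a minimal sketch in Python with rdflib. The file name is hypothetical and the cc: property names are my assumption about the Creative Commons vocabulary, not code taken from the Add-in:

from rdflib import Graph

g = Graph()
g.parse("index.rdf", format="xml")  # hypothetical local copy of the license RDF

query = """
PREFIX cc: <http://creativecommons.org/ns#>
SELECT ?license ?permits ?requires
WHERE {
    ?license a cc:License .
    OPTIONAL { ?license cc:permits ?permits }
    OPTIONAL { ?license cc:requires ?requires }
}
"""
for row in g.query(query):
    # print what each license permits and requires
    print(row.license, row.permits, row.requires)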

Wednesday, December 12, 2007

Open Archives Initiative Object Reuse and Exchange (OAI-ORE) defines standards for the description and exchange of aggregations of Web resources. This document provides an introduction and lists the specifications and user guide documents that make up the OAI-ORE standards.

A wiki has been put together to respond to the lack of any mention of open data in the Working Group on the Future of Bibliographic Control's report. If you agree with the statement, sign it; it is a wiki, after all.

The draft report of the Library of Congress's Working Group on the Future of Bibliographic Control features many interesting suggestions. In particular we wholeheartedly endorse the vision of a bibliographic ecosystem which is "collaborative, decentralized, international in scope and web-based". However, we are concerned that the report lacks any discussion of a key component for any future of bibliographic data: open licensing and access.

What is FRBR, and why is everyone talking about it? Is it really going to revolutionize cataloguing? And if so, what form will it take? Taylor and her compadres won't even try to teach you how to construct a hierarchical catalog record. Instead, their efforts are directed towards showcasing what's possible when digital technology and traditional cataloging practice meet. Serials, art, music, moving images, maps, and archival materials are just a few of the formats covered. Not for catalogers only.

The 2007 edition of the MARC Code List for Languages is now available from the Library of Congress. This new publication contains a list of languages and their associated three-character alphabetic codes that allow for the designation of the language or languages in MARC records. References from variant forms and specific language names assigned to group codes are included. This edition contains 484 discrete codes, of which 55 are used for groups of languages.

The list includes all valid codes and code assignments as of September 2007 and supersedes the 2003 edition of the MARC Code List for Languages. There are 27 code additions and 12 changed code captions in this edition.

Thursday, December 06, 2007

The revised Character set specifications are now posted on the MARC site. They take into account the use of the full Unicode repertoire, as opposed to only the MARC-8 subset of Unicode, and also include the loss-less and lossy techniques for converting full Unicode to MARC-8 repertoire that were approved this year.

The MARC-8 specifications are still part of the document and the MARC-8 character code tables and mappings have some improved formatting, but no changes have been made to the MARC-8 to Unicode character set mappings. The XML (all MARC-8 repertoire) and comma-delimited (East Asian MARC-8 only) files are still downloadable, but we plan to improve the XML file in the near future. We are interested to know whether the comma-delimited file is used, as we may only need to offer the XML for download.
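The lossy technique amounts to replacing any character that falls outside the MARC-8 repertoire with a numeric character reference. A toy sketch of the idea in Python; the real repertoire comes from the code tables, this just fakes a tiny one:

# characters we pretend are representable in MARC-8 (the real tables are much larger)
MARC8_REPERTOIRE = set("abcdefghijklmnopqrstuvwxyz"
                       "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789 .,;:-")

def to_marc8_lossy(text):
    """Keep representable characters; replace the rest with &#xXXXX; references."""
    out = []
    for ch in text:
        if ch in MARC8_REPERTOIRE:
            out.append(ch)
        else:
            out.append("&#x%04X;" % ord(ch))
    return "".join(out)

print(to_marc8_lossy("Dvořák"))   # Dvo&#x0159;&#x00E1;k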

It includes links to the recently-approved amendment to the expression entity, to the 1998 text in PDF and HTML, to errata that were identified during the review process, and to a new list of basic readings about FRBR.

Tuesday, December 04, 2007

An updated specification for DC-TEXT, a syntax for serializing, or representing, a Dublin Core metadata description set in plain text, has been published as a DCMI Recommended Resource.

The "Description Set Model" of the DCMI Abstract Model [DCAM] describes the constructs that make up a DC metadata description set. This document specifies a syntax for serialising, or representing, a DC metadata description set in plain text. The format is referred to as "DC-Text". A plain text format for serialisation of such description sets is useful as a means of presenting examples in a human-readable form which highlights the constructs of the DCMI Abstract Model, and also as a means of comparing the information represented in other machine-processable formats.

Monday, December 03, 2007

The codes listed below have been recently approved for use in MARC 21 records. The codes will be added to the online MARC Code Lists for Relators, Sources, Description Conventions.

These codes should not be used in exchange records until after January 30, 2008. This 60-day waiting period is required to provide MARC 21 implementers time to include newly defined codes in any validation tables they may apply to the MARC fields where the codes are used.

Term, Name, Title Sources

The following codes are for use in subfield $2 in fields 600-657 in Bibliographic and Community Information records, and in subfield $f in field 040 (Cataloging Source) in Authority records.

Additions:

asrcrfcd

Australian Standard Research Classification: Research Fields, Courses and Disciplines (RFCD) classification (Canberra: Australian Bureau of Statistics) [use only after January 30, 2008]

asrcseo

Australian Standard Research Classification: Socio-Economic Objective (SEO) classification (Canberra: Australian Bureau of Statistics) [use only after January 30, 2008]

asrctoa

Australian Standard Research Classification: Type of Activity (TOA) classification (Canberra: Australian Bureau of Statistics) [use only after January 30, 2008]

Sunday, December 02, 2007

I'm signed up for TLA. Did it too late to get a decent rate on a conference hotel so I'll be staying behind the convention center. From the map it looks close, not a bad walk. I'm on the ballot for councilor for the Digital Library group. So, I'll be going there and to the TRGCC events.

Last spring I suggested they have Cali Lewis on the program. She lives in Dallas and has a video podcast, Geekbrief. Her story is great: two years ago she was working at a u-rent-space place and heard about podcasting. Without any experience she and her husband started one. Now that is their job. She has been on TV and rubs shoulders with Web 2.0 luminaries. Since I was the one to suggest her, I hope she gets a good turnout. I should be there unless it conflicts with the DL or TRGCC events, or it's part of a preconference workshop. Hope not.

Friday, November 30, 2007

This note from Martha Yee was posted to the FRBR discussion e-mail list.

I have written elsewhere about the fact that our rules and our cataloging data are already considerably FRBR-ized and that what is lacking for the creation of true FRBR-ized catalogs is adequate software support. ("FRBRization: a Method for Turning Online Public Finding Lists into Online Public Catalogs." Information Technology and Libraries 2005; 24:3:77-95. [also at the California Digital Library eScholarship Repository, http://repositories.cdlib.org/postprints/715].) We already collocate all of the expressions of a work using work identifiers (formerly known as main entries). However, it is still up to the user to look through all of the various expressions and manifestations of the work and make decisions about which one is the most useful.

With the proliferation of methods of reproduction in the 20th century, this set of all of the various manifestations and expressions of a particular work has become more and more chaotic, however. At the International Conference on the Principles & Future Development of AACR in Toronto in 1997, I thought I heard a desire to revise AACR to further FRBR-ize the rules so that catalogers went beneath work collocation and performed expression and manifestation collocation to aid users in navigating this chaos. Instead, RDA seems to be headed toward an increase in chaos by atomizing the bibliographic description into lists of data elements that are all tied to the FRBR entity manifestation. As Hal Cain so eloquently put it in his September 6, 2007, post to Autocat, "Compiled bibliographic information has greater value than just the value of the separate data."

I have been a vocal critic during this process, but it occurred to me that people might not really understand what I was talking about without a demonstration code, an alternative RDA, so to speak. Thus, with the help of many generous and intelligent friends, whom I acknowledge in the introduction, I have created such a code, which you can view at http://myee.bol.ucla.edu. Since it is clear that we need to move toward more standard ways of coding our data within the sphere of the internet, I have made a stab at creating an RDF model of my cataloging code, as well. I'm certain that it is currently a very amateurish effort, as it is my first data model of any kind, but it might encourage more expert data modelers to help improve it as a group effort. (I should say that I have already received considerable help from the most generous topic map expert Alexander Johannesen). The data modelling process has already been valuable to me in that it has raised a number of issues that I suspect would arise in any effort to model the bibliographic universe (a discussion of these, including Alexander's comments and some from Sara Shatford Layne, can be found at: http://myee.bol.ucla.edu/rdfmodel.html).
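This is not Ms. Yee's model, but just to make the general shape of such a model concrete, here are a few triples tying a work to an expression and a manifestation, built with Python's rdflib under an invented namespace:

from rdflib import Graph, Namespace, Literal, RDF

EX = Namespace("http://example.org/cat/")  # invented namespace, not Yee's

g = Graph()
work = EX["work/moby-dick"]
expr = EX["expression/moby-dick-english-text"]
manif = EX["manifestation/moby-dick-1851-harper"]

g.add((work, RDF.type, EX.Work))
g.add((work, EX.title, Literal("Moby Dick")))
g.add((expr, RDF.type, EX.Expression))
g.add((expr, EX.realizationOf, work))        # expression realizes the work
g.add((manif, RDF.type, EX.Manifestation))
g.add((manif, EX.embodimentOf, expr))        # manifestation embodies the expression
g.add((manif, EX.publisher, Literal("Harper & Brothers")))

print(g.serialize(format="turtle"))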

It may well be that catalogers do not have enough information to collocate items at the expression and manifestation levels, and that the designers of our current Anglo-American cataloging practices were wiser than we seem to give them credit for these days in limiting collocation to the work level except in the case of prolific works, which get some expression collocation. It may also be that our illustrious leaders have so thoroughly deprofessionalized cataloging that there is no longer any personnel available to carry out this user service. If either or both of those propositions are the case, I would suggest that we abandon the current RDA development process and work instead on designing an effective RDF (or topic map?) model of our current cataloging rules and our millions of existing cataloging records.

The Yee rules also contain some suggestions for reforming our practices in other ways to bring our entity definitions into closer alignment both with those of our users and with those of our colleagues outside the Anglo-American world, in order to facilitate better international cooperation in creating a virtual international authority file.

So, with some trepidation, I put this forth for you all to tear apart (smile). Please send comments to the RDA, FRBR, and NGC4LIB lists, to my email address (myee@ucla.edu) and/or post them to my blog at: http://yeecatrule.wordpress.com/

News from LC. "Due to our printing company's error, Library of Congress Classification schedule Q: Science, 2007 edition was delivered to CDS with pages 10 and 11 missing." They have mounted the missing pages on their website.

This patch adds a few improvements to the controlled vocabulary add-on currently present in DSpace:

The Node Schema (see [dspace]/docs/controlledvocabulary.xsd) has been updated to support other types of relationships and/or properties that are part of a true thesaurus, and now all elements in this structure are properly processed and displayed by the add-on.

The add-on recognizes thesaurus/controlled vocabularies described in SKOS standard schema. This vocabulary can be created according to the W3C recommendations and must be saved with the extension ".skos".

In the DC metadata fields you wish to control, it is now possible to configure distinct vocabularies associated with specific communities. You may also define one or more generic vocabularies to be used by default on the rest of the communities. To use this functionality you have to edit the file [dspace]/config/input-forms.xml and place a new "controlled-vocabularies" element under the field that you want to control.

In order to expand the use of non-Latin scripts already used in bibliographic records, the MDS-Maps, MDS-Music, MDS-Visual Materials, and MDS-Computer Files records may now include records containing Japanese, Arabic, Chinese, Korean, Persian, Hebrew, Yiddish, Greek, or Cyrillic script characters. These elements will become valid for distribution no earlier than January 2008.

Any questions regarding the data content of these records can be directed to:

Network Development and MARC Standards Office, Library of Congress

The codes listed below have been recently approved for use in MARC 21 records. The codes will be added to the online MARC Code Lists for Relators, Sources, Description Conventions.

The codes should not be used in exchange records until after January 16, 2008. This 60-day waiting period is required to provide MARC 21 implementers time to include newly defined codes in any validation tables they may apply to the MARC fields where the codes are used.

Term, Name, Title Sources

The following codes are for use in subfield $2 in fields 600-657 in Bibliographic and Community Information records, and in subfield $f in field 040 (Cataloging Source) in Authority records.

Additions:

afset

American Folklore Society Ethnographic Thesaurus [use only after January 9, 2008]

aiatsisl

AIATSIS Language Thesaurus (Canberra: Australian Institute of Aboriginal and Torres Strait Islander Studies) [use only after January 16, 2008]

aiatsisp

AIATSIS Place Thesaurus (Canberra: Australian Institute of Aboriginal and Torres Strait Islander Studies) [use only after January 16, 2008]

aiatsiss

AIATSIS Subject Thesaurus (Canberra: Australian Institute of Aboriginal and Torres Strait Islander Studies) [use only after January 16, 2008]

Other Standard Identifier

The following code is for use in subfield $2 in field 024 in Bibliographic and Community Information records (Other Standard Identifier).

Addition:

gtin-14

Global Trade Identification Number 14 (EAN/UCC-128 or ITF-14) [use only after January 16, 2008]

Friday, November 16, 2007

OCLC, the Bibliothèque nationale de France, the Deutsche Nationalbibliothek and the Library of Congress have signed a memorandum of understanding to extend and enhance the Virtual International Authority File (VIAF), a project which virtually combines multiple name authority files into a single name authority service.

In November 2006, Deanna Marcum, associate librarian for Library Services at the Library of Congress, convened a Working Group on the Future of Bibliographic Control to examine the future of bibliographic description in the 21st century in light of advances in search engine technology, the popularity of the Internet and the influx of electronic information resources.

After a year of careful and comprehensive study, the group presented its draft report to Library of Congress managers and staff in the Coolidge Auditorium. The draft report will be made available on or about Nov. 30, and a comment period on the draft report will last until Dec. 15, 2007.

Due to unprecedented demand for the live webcast, the Library has made this unedited version of the presentation available immediately. An enhanced version of this webcast, featuring the accompanying slide presentation, will be available shortly.

Where would the people of the world be without published material? Hardly any information about anything would be exchanged, even in today's modern society. Published material is so ubiquitous that you couldn't avoid it if you tried. The newspapers and websites you read, the billboards you see on the way to work, and even the reports on your desk would all make this a futile attempt. As Sarah Milstein and Tim O'Reilly explain in this presentation, the published material we see today was not just invented recently; it has been evolving constantly since the days of writing on clay tablets.

I think Tim O'Reilly would be a great keynote speaker at library conferences.

21 Nov. 2007. Had a chance to listen, and it is short with not much content. It is well presented and the recording is good, so it is worth a listen, but it is not the content-filled talk I was hoping for.

Wednesday, November 14, 2007

This survey is for librarians who have supervised a library science intern or practicum student in cataloging. A great deal of discussion about cataloging education has been raised in the library community as of late, and we feel an important component of cataloging education is the practicum/internship experience.

Our intent is to include the survey results in a journal article that examines cataloging practicum/internship experiences and offers guidelines to both students and supervisors on how to create a successful cataloging practicum/internship experience. If you supervised a library science graduate student internship or practicum, we invite you to participate in this survey.

The survey is eleven questions long and should take approximately fifteen minutes. All results will remain completely anonymous. The survey is completely voluntary, and your completion of the survey implies your consent to participate in this study. You must be 18 years of age or older to participate in this survey. You are not required to answer every question and can choose to skip to the next question. The study has fulfilled the requirements for conducting human-subject research. Please provide as much detail about your experiences as possible.

The survey will be available through December 8, 2007. If you have any questions, please contact Melanie McGurr at the Ohio State University, mcgurr.2@osu.edu, or Ione Damasco at the University of Dayton, ione.damasco@notes.udayton.edu.

Tuesday, November 13, 2007

Outcomes of the October 2007 meeting of the Joint Steering Committee for Development of RDA have been mounted on the JSC Web site.

The Outcomes outline a new organization for RDA which has been agreed to by the Joint Steering Committee and the Committee of Principals. Further information on the organization has also been posted on the JSC Web site. New sections of RDA will be issued for review in December 2007.

Does your library use a MARC Record Service such as SFX's MARCit! or Serials Solutions' 360 MARC Updates? If so, I invite you to participate in a brief, anonymous survey that is designed to provide information about how libraries are using different MARC record services. The goals of the survey are to identify the benefits of using these services and areas in which the services could be improved, as well as to solicit general feedback about them.

Your participation in this study will provide valuable information about a major aspect of serials cataloging: outsourcing MARC records to vendors. Your responses will inform a formally published article, which I will share with the listservs once it is finished.

Thursday, November 08, 2007

Nominations for Library Journal's Movers and Shakers that were made before November 5 were not captured and stored on LJ's server. We need you to go back and renominate those people. We are assured that the electronic nomination form is working, but if you prefer, you can supply all the information requested on the form and either fax it to 646-746-6734 or send it in an e-mail to Francine Fialkoff, fialkoff@reedbusiness.com. The deadline has been extended to November 28.

Wednesday, November 07, 2007

All the discussion of on-line social spaces seems to miss an important way folks are connecting, through their game consoles. I'm not sure of the numbers but I'm guessing the number of people connecting using X-Box Live is not insignificant. Where are the libraries in Halo 3?

Developed by programmer Christine Haygood Deane under the direction of metadata librarian Melanie Feltner-Reichert, this open source client-side software provides control of date formats and other problematic fields at the point of creation, while shielding creators from the need to work in XML. Metadata records can be partially created, saved to the desktop, then reloaded and completed at a later date. Final versions can be downloaded or cut-and-pasted into text editors for use elsewhere.

We developed this system in support of our statewide digitization project, Volunteer Voices, and we hope it will assist others in their efforts to create valuable digital libraries as well. The software can be viewed here and downloaded here.

Friday, November 02, 2007

Lorcan Dempsey mentions Drill Clouds in his discussion of some interesting work being done at CISTI. They are Tag Clouds 2.0, ones that enhance the searching and presentation of results. Might be something worth considering.

Ungava extends tag clouds to make them a useful tool for search refinement. That is, to use a tag cloud to refine an existing query by adding new elements to the query through interactions with the cloud. As this results in a kind of drill-down search behaviour, these new clouds have been named drill clouds.

A meeting will be held on March 3, 2008 at Johns Hopkins University to roll-out the first beta release of the OAI-ORE specifications. These specifications describe a data model to identify and describe aggregations of web resources, and the encoding of the data model in the XML-based Atom syndication format.

Wednesday, October 31, 2007

Recently, this weblog passed the 3,000-postings mark. Since Tuesday, March 5, 2002, when the first item was posted, there have been at least 3,000 news items I thought would be of interest to catalogers. Maybe a few fewer; there were some of general library interest and a couple of personal entries. Cataloging is changing. Catalogablog has tried to keep everyone, myself included, aware of what is happening. What a long strange trip it's been, and promises to be.

The xISSN Web service supplies ISSNs and other information associated with serial publications represented in WorldCat. Submit an ISSN to this service, and it returns a list of related ISSNs and selected metadata. The service is based on WorldCat, the world's largest network of library content and services. The current xISSN database covers 575,573 ISSNs.

Ideal for Web-enabled search applications, such as library catalogs and OpenURL Resolvers, and based on associations made in the WorldCat database, xISSN enables an end user to link to information about alternate versions of serial publications.

This is an API: requests are accepted using REST (or OpenURL or unAPI). It is not a place where you can type an ISSN into a box and get back a list.
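A REST request is simply an HTTP GET with the ISSN in the path. A sketch in Python; the URL pattern and method name are from memory, so check them against OCLC's documentation before relying on them:

import urllib.request

issn = "1550-7998"  # an example ISSN
url = ("http://xissn.worldcat.org/webservices/xid/issn/%s"
       "?method=getHistory&format=xml" % issn)
with urllib.request.urlopen(url) as response:
    # XML listing related ISSNs and selected metadata
    print(response.read().decode("utf-8"))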

Tuesday, October 30, 2007

One bit I found interesting was that Baker and Taylor Cataloging Plus libraries are now OCLC members. These were described as small school libraries, ones that would never have joined otherwise. I wonder if this is a route the Lunar and Planetary Institute (MPOW) could use to join. We cannot afford the set-up and training fees for OCLC, but we just might be able to pay the yearly dues. (Maybe.) We have a lot of unique or rare records to contribute, and I'd love to become a NACO library, since we have access to the planetary science community. But we just cannot afford OCLC as the fees are currently structured.

I would like to thank Steve Miller for his work in updating this presentation to reflect current rules and practices (primarily the transition to biblevel "i" by OCLC and the removal of the prohibition against physical description of remote electronic resources).

I would also like to acknowledge the Subcommittee on Maintenance for CAPC Resources (David Prochazka, Paige Andrew, Richard N. Leigh, and Susan Leister) for their work in organizing a review of the resources on the CAPC web site and arranging for those that have become out-of-date to be updated or archived.

Thursday, October 25, 2007

OCLC's OpenURL Referrer is now available for Internet Explorer! Previously available only for Firefox, this popular browser extension inserts OpenURLs into Google Scholar and Google News Archive search results. It also detects and makes links out of web COinS, such as those found in Wikipedia and Worldcat.org.

Wednesday, October 24, 2007

Just had an idea: rather than an expensive bookmobile, why not deliver and pick up books by taxi? A yearly contract with the local cab company to pick up books and bring them to the library, and also deliver books to people, might be less expensive than a bookmobile. The cab drivers could even return the books at the end of their shifts; just collect them in a clean part of the trunk. Just an idea, not fully formed.

Tuesday, October 23, 2007

The editors of Library Journal need your help in identifying the emerging leaders in the library world. Our seventh annual Movers & Shakers supplement will profile 50-plus up-and-coming individuals from across the United States and Canada who are innovative, creative, and making a difference. From librarians to vendors to others who work in the library field, Movers & Shakers 2008 will celebrate the new professionals who are moving our libraries ahead. Movers & Shakers 2008 will be distributed with the March 15 issue of Library Journal.

On this episode of Interviews with Innovators, host Jon Udell invites Stuart Weibel to reflect on his leading role in the Dublin Core Metadata Initiative. They also discuss how databases like the Online Computer Library Center's WorldCat - which consolidates bibliographic data from over 50,000 participating libraries - can enrich our experience of using and contributing to the web.

Monday, October 22, 2007

The following codes have been approved for use in the international language code standard, ISO 639-2 (Codes for the Representation of Names of Languages--Part 2: alpha-3 code) and are also being added to the MARC Code List for Languages. These are being published in the new 2007 edition of the MARC Code List for Languages.

New code Language name Previously coded

rup Aromanian roa

syc Syriac n/a

Note that this is a new code for Classical Syriac; the existing code "syr" is changing its caption to: Syriac, Modern

zbl Blissymbolics n/a

LC Implementation Plans

Subscribers can anticipate receiving MARC records reflecting these changes in all distribution services not earlier than January 22, 2008.

News from the MARC folks. A new 2007 edition of the MARC Language Code list is now available. The publication is presented in PDF with bookmarks for navigation. The list is published from a new XML file that is also made available from the site. The services available from the XML file will be enhanced over the coming months as the other MARC code lists are also released.

Sounds like there may be an API for accessing the code lists. Nice. Then the ILS could just tap that file rather than maintain internal lists and always have current info. That would make the codes useful to other communities as well.

Friday, October 19, 2007

There is an interesting discussion on AUTOCAT about brackets. It seems ISBD has been changed so each subfield has opening and closing brackets. Like so, [S.l.] : [s.n.]. AACR would display as [S.l. : s.n.]. AACR won't be updated, but the replacement RDA currently follows the new ISBD rules. This poses the larger question, are we following AACR until RDA is published or moving in that direction now by following the new ISBD?

Yet another open-source OPAC replacement has been released, Fish4Info.

Fish4Info is not an OPAC. Why not? OPACs connect users with MARC records when what they really want are resources. Fish4Info is focused on users, and provides a more positive finding experience (as opposed to a frustrating and fruitless search). We talked about book reviews and social connections and the power of a library portal that is a destination instead of a pass-through.

The modules in the code base include:

MARCImport - place a MARC file on the server and this transfers the data into Drupal nodes

BCCKReview - a book review module built using CCK

EZ-Amazon - helps you use an Amazon API developer’s key to access Amazon content

Some others I am sure I am forgetting, but which you will find in /drupal/sites/all/modules/…

The major authority record exchange partners (British Library, Library of Congress, National Library of Medicine, and OCLC, Inc., in consultation with Library and Archives Canada) have agreed to a basic outline that will allow for the addition of references with non-Latin characters to name authority records that make up the LC/NACO Authority File.

While the romanized form will continue to be the authorized heading (authority record 1XX field), NACO contributors will be able to add references in non-Latin scripts following MARC 21’s “Model B” for multi-script records. Model B provides for unlinked non-Latin script fields with the same MARC tags used for romanized data, such as authority record 4XX fields. Using Model B for authorities is a departure from the current bibliographic record practice of many Anglo-American libraries where non-Latin characters are exported as 880 fields (Alternate Graphic Representation) using MARC 21’s “Model A” for multiscript records.

For the initial implementation period, the use of non-Latin scripts will be limited to those scripts that represent the MARC-8 repertoire of UTF-8 (Japanese, Arabic, Chinese, Korean, Persian, Hebrew, Yiddish, Cyrillic, and Greek). Although the exchange of authority records between the NACO nodes will be in UTF-8, LC’s Cataloging Distribution Service will continue to supply the MDS-Authorities weekly subscription product in both UTF-8 and MARC-8 for some period of time. It is expected that the use of non-Latin scripts beyond the MARC-8 repertoire will be implemented in the future.
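An invented illustration of the difference (not taken from the announcement): under Model A, common in Anglo-American bibliographic records, the romanized field and the script field are paired through $6 linkage and an 880; under Model B, as planned for the authority file, the script form simply sits in an ordinary reference field.

Model A (bibliographic record):
245 10 $6 880-01 $a Sekai no rekishi
880 10 $6 245-01 $a 世界の歴史

Model B (authority record):
100 1# $a Tanizaki, Jun'ichiro, $d 1886-1965
400 1# $a 谷崎潤一郎, $d 1886-1965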

The Deutsche Nationalbibliothek, the Library of Congress, the Bibliothèque nationale de France, and OCLC are jointly conducting a project to match and link the authority records for personal names in the retrospective personal name authority files of the Deutsche Nationalbibliothek (dnb), the Library of Congress (LC), and the Bibliothèque nationale de France (BnF).

Thursday, October 18, 2007

Amazon has started selling DRM-free MP3s. They are encoded at 256 kbps and sell for 89 to 99 cents each. Does this pose a threat to iTunes? Well, the convenience is lacking at Amazon. At iTunes you get seamless throughput from iTunes to your computer to your iPod. At Amazon each of those steps requires something on my part. It takes a little bit better skill set.

The other part is the catalog. No Beatles, almost no U2. They do have the Frank Sinatra with Bono song. Only one song by the Corrs. A good selection of Pentangle. I'll give it a try, but with the limited catalog and it being less convenient, I don't see this as an iTunes killer.

Tuesday, October 16, 2007

This article explores the importance of correctly understanding, using, and interpreting map cataloging rules to provide the most accurate information possible, with the goal of making it possible to find maps quickly and accurately, whether using database retrieval or a coordinate-driven search engine. It is proposed that we can find an efficient universal method to represent locations, addresses, and areas of the world through the use of geographic coordinates for print and digital cartographic materials. Finally, the article states the strong need to standardize spatial cataloging information to improve search query responses by providing uniform information and by addressing the problems discussed in this article.

Just as MARC gives the structure and AACR guides us on how to fill that structure, there now exists the same documents for RSS. The RSS specification gives the elements, the new Really Simple Syndication Best Practices Profile gives guidelines on how to use those elements. Comments are being accepted on the document.
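To carry the analogy through: the specification defines the elements, and the profile tells you how to fill them. A minimal item with made-up content looks like this:

<item>
  <title>New MARC code list released</title>
  <link>http://example.org/2007/10/new-marc-code-list</link>
  <description>A short summary of the post goes here.</description>
  <pubDate>Mon, 22 Oct 2007 09:00:00 GMT</pubDate>
  <guid isPermaLink="true">http://example.org/2007/10/new-marc-code-list</guid>
</item>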

Originally intended to be an appendix to the 2002 AACR2 rule revisions, Differences Between, Changes Within evolved into a stand-alone document that supplements current descriptive cataloging rules by providing information about creating new records or updating existing records.

The document helps guide the cataloger in determining whether the item in hand can be cataloged with existing copy or requires a new bibliographic record. General guidelines are followed by specific guidelines for manifestation-level records for single-part monographs, multipart monographs, integrating resources, and serials. The text describes what constitutes a major difference between manifestations, requiring the creation of an original record, as well as detailing major changes within a serial manifestation that would lead to the creation of a new record. In addition, guidance is also provided to identify minor changes that would not require a new bibliographic record, but might necessitate updating an existing record.

The new edition of Differences Between, Changes Within reflects changes through the final set of amendments to AACR2, which were issued in 2005. Some guidelines have been changed and some removed. All rule references have been verified and updated wherever necessary.

Thursday, October 11, 2007

The Cataloging Policy and Support Office (CPSO) has begun creating and distributing subject authority records called "validation records" that represent valid 6XX headings plus subdivision strings (topical, chronological, geographic, and form), including strings with free-floating subdivisions for which subject authority records were not previously made. Validation records are being created to improve the "validation" capability of many integrated library management systems used by the Library of Congress and others by providing an authorized form of subject heading strings for machine matching.

The validation records are identified by the presence of the 667 field which reads: "Record generated for validation purposes." All validation records will appear in LC's online catalogs but will not be printed in the annual edition of Library of Congress Subject Headings nor will they appear as proposed headings on the LC Subject Headings Weekly List. As of Sept. 25, 2007, 1,900 validation records have been distributed. Some examples are:

sh2007005269 Abdominal wall$xAbnormalities (May Subd Geog)

sh2007100421 United States$xEconomic policy$vPeriodicals

sh2007100247 Great Britain$xRelations$zUnited States

sh2007100224 Indians of North America$vSongs and music

CPSO is creating the validation records by using a combination of one-by-one record creation as subject strings are encountered in weekly operations and use of an automated program to generate and distribute validation records without human intervention. For this latter automated method, the focus is on subject heading strings applied since the year 2002 for which the LC catalog contains fifty or more bibliographic records that include the same 6XX string. Once the automated program is tested and approved, several thousand records are expected to be generated and distributed each week. CPSO will make an announcement before the automated method is put into full production.
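The automated method, as described, boils down to counting identical 6XX strings and minting an authority record once a string clears the fifty-record threshold. A loose sketch of that logic in Python (nothing to do with CPSO's actual program):

from collections import Counter

# pretend these are the 6XX strings pulled from post-2002 bibliographic records
subject_strings = [
    "Indians of North America$vSongs and music",
    "United States$xEconomic policy$vPeriodicals",
    "Indians of North America$vSongs and music",
    # ... millions more in the real catalog
]

THRESHOLD = 50  # CPSO's stated cutoff: fifty or more bib records with the same string

counts = Counter(subject_strings)
candidates = [s for s, n in counts.items() if n >= THRESHOLD]

for heading in candidates:
    # a real program would also check that no authority record exists yet
    print("generate validation record for:", heading)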

The task force was charged with creating a set of best practices for coding MARC 008/lang and 041 language information for videos, especially DVDs, and with using that exercise to examine whether any changes could be made to the MARC format (coding or directions) that would improve access to the multiple types of language information found on videos.

Providing access to a collection of email messages isn't something we worry much about, unless we are archivists. Still, providing access is what catalogers do. An IMAP plugin for SquirrelRDF, by John Recker, Davide Eynard, and Craig Sayers. HPL-2007-161.

The Semantic Web aims to make information accessible to both humans and machines, using standard formats for data and making information available in a formal and structured way. Since the advent of RDF (Resource Description Framework) there have been many efforts to extract and convert existing information into this format. In this paper we describe an adapter tool for the IMAP protocol, developed as a plugin for SquirrelRDF, which allows users to query IMAP mailboxes using SPARQL. The information returned looks like RDF, is always current, and can be reused and integrated inside other applications.
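I have not tried the plugin, but the idea is that a mailbox becomes one more RDF source you can query. A hypothetical query; the mail: vocabulary below is invented for illustration, not the plugin's actual terms:

PREFIX mail: <http://example.org/mail#>
SELECT ?subject ?from ?date
WHERE {
  ?msg mail:subject ?subject ;
       mail:from    ?from ;
       mail:date    ?date .
  FILTER regex(?subject, "cataloging", "i")
}
ORDER BY DESC(?date)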

Thursday, October 04, 2007

We are collecting your suggestions to be used in preparing a chapter on metadata decisions for the Digital Library Guidelines, a task of the IFLA-World Digital Library Working Group on Digital Library Guidelines. The Guidelines will be developed for use by libraries and other cultural institutions around the world. The purpose of this survey is to investigate different issues, levels, and concerns regarding metadata and controlled vocabularies that need to be addressed in the Guidelines.

Tuesday, October 02, 2007

marcdb is a little utility for reading MARC data into a relational database. The magic of sqlalchemy and elixir means that you can use any supported rdbms: postgres, sqlite, mysql, etc. You'll just need to make sure you've got the relevant database driver installed.

Still available is MARC RTP for converting selected fields into a format databases accept. With these two tools and Terry Reese's MarcEdit converting MARC to other formats should be a snap.
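For a sense of what loading MARC into a database actually involves, here is a minimal sketch using pymarc and sqlite3. It is not marcdb's schema, just the general idea, and the input file name is hypothetical:

import sqlite3
from pymarc import MARCReader  # pymarc must be installed

conn = sqlite3.connect("records.db")
conn.execute("""CREATE TABLE IF NOT EXISTS fields
                (record_id INTEGER, tag TEXT, value TEXT)""")

with open("records.mrc", "rb") as fh:   # hypothetical file of MARC records
    for record_id, record in enumerate(MARCReader(fh)):
        for field in record.get_fields():
            conn.execute("INSERT INTO fields VALUES (?, ?, ?)",
                         (record_id, field.tag, field.value()))
conn.commit()

# e.g. pull every 245 (title statement) back out
for row in conn.execute("SELECT value FROM fields WHERE tag = '245'"):
    print(row[0])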

Monday, October 01, 2007

The codes listed below have been recently approved for use in MARC 21 records. The codes will be added to the online MARC Code Lists for Relators, Sources, Description Conventions.

The codes should not be used in exchange records until after November 28, 2007. This 60-day waiting period is required to provide MARC 21 implementers time to include newly defined codes in any validation tables they may apply to the MARC fields where the codes are used.

Language Coding Sources

The following codes are for use in subfield $2 in field 041 in Bibliographic and Community Information records (Language Code).

Thursday, September 27, 2007

For those times when Dublin Core is too complex, there is Kernel metadata. Just four elements: who, what, when, and where.

Kernel metadata is a small prescriptive vocabulary designed to support highly uniform but minimal object descriptions for the purpose of orderly collection management. The Kernel vocabulary, based on a subset of the Dublin Core (DC) metadata element set, aims to describe objects of any form or category, but its reach is limited to a small number of fundamental questions such as who, what, when, and where. The Electronic Resource Citation (ERC), also specified in this document, is an object description that addresses those four questions using Kernel and other metadata elements.
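An ERC reads like a tiny labeled record answering those four questions. A made-up example in the spirit of the spec (see the document itself for the exact syntax):

erc:
who:   Melville, Herman
what:  Moby Dick
when:  1851
where: http://example.org/texts/moby-dick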

Friday, September 21, 2007

This post is personal, but it will get the word out to lots of friends and family quickly. Next Tuesday, Sept. 25, our family will grow by three: two girls (12 & 13) and a young man (14). We will be their home for a while. They do have a much older sister in Panama, who may take custody at some point in time, or not. We expect them to be part of our family for a few to several years. Pictures on Flickr as they become available.

Friday, September 14, 2007

The code listed below has been recently approved for use in MARC 21 records. The code will be added to the online MARC Code Lists for Relators, Sources, Description Conventions.

The code should not be used in exchange records until after November 13, 2007. This 60-day waiting period is required to provide MARC 21 implementers time to include newly defined codes in any validation tables they may apply to the MARC fields where the codes are used.

Description Conventions

The following code is for use in subfield $2 in field 040 in Bibliographic and Authority records (Description Conventions).

Thursday, September 13, 2007

Today, the World Wide Web Consortium completed an important link between Semantic Web and microformats communities. With "Gleaning Resource Descriptions from Dialects of Languages", or GRDDL (pronounced "griddle"), software can automatically extract information from structured Web pages to make it part of the Semantic Web. Those accustomed to expressing structured data with microformats in XHTML can thus increase the value of their existing data by porting it to the Semantic Web, at very low cost.

"Sometimes one line of code can make a world of difference," said Tim Berners-Lee, W3C Director. "Just as stylesheets make Web pages more readable to people, GRDDL makes Web pages, microformat tags, XML documents, and data more readable to Semantic Web applications, opening more data to new possibilities and creative reuse."

Tuesday, September 11, 2007

September is Library Card Sign-up Month. It is also library card renewal month at MPOW, the Lunar and Planetary Institute. In my latest podcast I used the 10-second clip from ALA to reinforce the renewal message. (I felt like a sound engineer getting it to the right volume and speed.) They have other lengths that might meet your needs better. Thanks, ALA.

The OWL Web Ontology Language is endowed with two model theories, reflecting its origins as a compromise between two different communities. By design these model theories give rise to very similar semantics, and a precise statement of the correspondence between the model theories is conjectured with a sketch proof at the end of the OWL semantics specification document. We have filled in the details of this sketch proof using the Isabelle/HOL proof assistant, and developed machinery for further study of the formal semantics of OWL. Our study was sufficiently detailed to find a handful of minor errors in the specification of the semantics of OWL that previous work had overlooked. We also sought a stronger result by showing a partial converse to the known correspondence, but it proved impossible to achieve this within our time constraints; instead we conjecture a possible method for strengthening the correspondence.

Monday, September 10, 2007

We invite you to participate in Mars Inside and Out!, a free NASA-supported workshop designed to bring earth and space science into your library and after-school children's and community programs, November 8 and 9, 2007.

Mars Inside and Out! will acquaint you with everything you need to know about the mysterious red planet to bring exciting programs to your community. You will learn about how the Martian environment has changed through time, the possibility for life on Mars, past, present, and future NASA missions to Mars, and plans and challenges for having humans living and working on Mars.

Scientists and educators from the Lunar and Planetary Institute will share space science information, resources, hands-on activities, and demonstrations developed specifically for librarians and after-school program providers to infuse into their programs with children ages 8 to 13 and their families.

During the workshop you will:

Meet NASA scientists and engineers involved in Mars exploration

Learn about Mars science, missions, and future exploration

Receive training in related hands-on science inquiry activities, designed for children ages 8 to 13

Receive related resources and materials that you can use in your programs

Explore ideas for presenting space science programs to young audiences and to other colleagues

Collaborate with other after-school program providers and children’s and youth librarians in Oklahoma and become part of the growing Explore! community

Receive a $100 stipend for attending!

The workshop is free. You will receive Mars Inside and Out! presentations, activities, and resources (posters, book lists, suggested Web sites), and the first 25 participants to register will receive a $100 stipend for completing the workshop. The materials are ready to be incorporated into your existing children’s and youth programs.

But wait — there’s more! You will also receive materials for ten additional Explore! space science topics (rockets, space stations, space colonies, egg-stronauts, solar system, shaping the planets, comets, staying healthy in space, and the Sun-Earth connection). Each of these topics has complementary hands-on activities and resources that can be found on the Explore! website.

The workshop begins at 9:00 am on Thursday, November 8, and continues until the close of the day, 5:00 pm, on Friday, November 9. Light breakfasts, lunches, and afternoon snacks will be provided and, of course, chocolate will be available, too! Participants are responsible for travel, housing, and dinner costs, and all logistical arrangements.

Space is limited; please register by 5 October to reserve your place in the workshop. Come join us for a fun-filled and learning-filled two days. We look forward to exploring Mars Inside and Out! with you. Drop me a request for a registration form.

District 8 of the Texas Library Association has announced that registration is open for those who wish to register for the Fall Meeting.

I personally like this meeting very much. I think it is the size of some state conferences, it gets about 1,000 attendees I guess. But, compared to TLA it is much more intimate. It is large enough to have a session or two I like, small enough to sit down and chat with folks I've not seen in too long.

Wednesday, September 05, 2007

EntityDescriber is an add-on tool for Connotea that allows taggers to select terms from a controlled vocabulary.

E.D. is a mechanism for intersecting the Semantic Web with the normal Web. It lets Connotea users (though we may extend it to other systems such as Del.icio.us) annotate (tag) resources on the Web with terms from existing controlled vocabularies such as MeSH, the Gene Ontology, the Atom ontology, and the Person ontology. For more thoughts on and progress with ED, see blog posts about ED.

You might enjoy using ED if any of the following apply to you:

You would like to organize your tags more effectively

You are using Connotea to create a reference system - for example for a class

You are a member of a group of people that would like to use a common set of tags - possibly with the aim of creating a nice reference library

You like the idea that every time you tag something you are contributing to the semantic web

You would like to utilize queries over your collection and others that take advantage of the structure of ontologies. For example, queries for "brain", that return resources tagged with "hippocampus", "cortex", "cerebellum", etc...

You would like to help an aging graduate student add one more chapter to his thesis...

Tuesday, September 04, 2007

Earlier I described my idea for an RSS-like XML feed for telescopes. The idea was to allow anyone to keep up with what particular telescopes were doing. In this post I will try to describe my current idea.

PERSNAME-L exists for the purpose of dealing with issues about personal names. To subscribe to PERSNAME-L, follow this link and click on "Join or leave the list (or change settings)", or send a message to LISTSERV@LISTS.OU.EDU with the words SUBSCRIBE PERSNAME-L followed by a forename and surname. I've found this to be a very useful group.

Thursday, August 30, 2007

Here is a very useful website for book-lovers, BookTour. It shows which authors are speaking in an area.

We're a free online service that connects authors and potential audiences of all sorts, from book groups to civic organizations, from bookstores to corporate events. Authors create their own page (biography, books, tour dates and availability) and any group looking for speakers can find them and contact them directly to arrange for an appearance. Relevant information for both authors and venues can be added in minutes through a simple fill-in-the-blanks interface. Connecting authors with potential audiences then becomes as easy as searching (by geography, book titles, subject, dates of availability) and sending an email.

There is an interview with the site's creators, Kevin Smokler and Adam Goldstein, on IT Conversations.

Zotero’s integration with word processing tools has been greatly improved. The MS Word plugin works much more seamlessly and we now support OpenOffice on Windows, Mac (in the form of NeoOffice), and Linux.

Zotero is also now better integrated with the desktop. Users can drag files from their desktop into their Zotero collection and can also drag attachments out of their Zotero collection onto their desktop.

We have begun to add tools to browse and visualize Zotero collections in new ways. Using MIT’s SIMILE Timeline widget, Zotero can now generate timelines from any collection or selected items.

Here is their description:

Zotero is an easy-to-use yet powerful research tool that helps you gather, organize, and analyze sources (citations, full texts, web pages, images, and other objects), and lets you share the results of your research in a variety of ways. An extension to the popular open-source web browser Firefox, Zotero includes the best parts of older reference manager software (like EndNote)—the ability to store author, title, and publication fields and to export that information as formatted references—and the best parts of modern software and web applications (like iTunes and del.icio.us), such as the ability to interact, tag, and search in advanced ways. Zotero integrates tightly with online resources; it can sense when users are viewing a book, article, or other object on the web, and—on many major research and library sites—find and automatically save the full reference information for the item in the correct fields. Since it lives in the web browser, it can effortlessly transmit information to, and receive information from, other web services and applications; since it runs on one’s personal computer, it can also communicate with software running there (such as Microsoft Word). And it can be used offline as well (e.g., on a plane, in an archive without WiFi).

Thursday, August 23, 2007

Scriblio, the Mellon Award-winning front end for the catalog, is now available for free download. It is based on WordPress, the popular blogging tool.

Scriblio (formerly WPopac) is an award winning, free, open source CMS and OPAC with faceted searching and browsing features based on WordPress. Scriblio is a project of Plymouth State University, supported in part by the Andrew W. Mellon Foundation.

Wednesday, August 22, 2007

We have revised the draft of the MODS schema version 3.3, which we had released for review in April. The revision is based on comments from the review of that draft.

Substantive changes to the previous (April 12) version:

Add an xlink attribute to physicalLocation. This would allow for a link to the website of the entity named in physicalLocation. It is equivalent to MARC 21 852 $u; the example given was a "Library of Congress" location linked to the LC website (see the reconstructed example below).

Add additional enumerated values for the authority attribute on languageTerm: ISO 639-3 and RFC 4646. ISO 639-3 is a new standard that codes all individual languages without the usage criteria that ISO 639-2 has. RFC 4646 updates RFC 3066, which details how to use language codes in Internet applications (RFC 3066 is already defined in MODS). We are planning to add these to the MARC source code list used for field 041 $2.
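For illustration only, here is a small Python/lxml sketch of what a languageTerm citing one of the new authority values might look like. The namespace URI is the standard MODS namespace, but the exact authority string and the language code used are my assumptions, not text from the draft.

```python
# Sketch only: building a MODS <language> element that cites the newly
# added ISO 639-3 authority value. Assumes lxml is installed; the
# "iso639-3" authority string and "eng" code are illustrative guesses.
from lxml import etree

MODS_NS = "http://www.loc.gov/mods/v3"
M = "{%s}" % MODS_NS

mods = etree.Element(M + "mods", nsmap={None: MODS_NS})
language = etree.SubElement(mods, M + "language")

# languageTerm carrying a coded value and naming its authority.
term = etree.SubElement(language, M + "languageTerm",
                        type="code", authority="iso639-3")
term.text = "eng"  # hypothetical example: English

print(etree.tostring(mods, pretty_print=True).decode())
```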

Friday, August 17, 2007

VuFind is a library resource portal designed and developed for libraries by libraries. The goal of VuFind is to enable your users to search and browse through all of your library's resources, replacing the traditional OPAC and extending its reach to include:

Catalog Records

Digital Library Items

Institutional Repository

Institutional Bibliography

Other Library Collections and Resources

VuFind is completely modular so you can implement just the basic system, or all of the components. And since it's open source, you can modify the modules to best fit your needs, or you can add new modules to extend your resource offerings.

Tuesday, August 14, 2007

The codes listed below have been recently approved for use in MARC 21 records. The codes will be added to the online MARC Code Lists for Relators, Sources, Description Conventions.

The codes should not be used in exchange records until after October 13, 2007. This 60-day waiting period is required to provide MARC 21 implementers time to include newly defined codes in any validation tables they may apply to the MARC fields where the codes are used.

Other Sources

The following code is for use in subfield $2 in field 017 in Bibliographic records (Copyright or Legal Deposit Number).

Addition:

rocgpt

R.O.C. Government Publications Catalogue (Taipei: Research, Development and Evaluation Commission, Executive Yuan) [use only after October 13, 2007]

The following code is for use in subfield $a in field 042 in Authority, Bibliographic and Classification records (Authentication Code).

Addition:

ukblderived

British Library derived cataloging. Code ukblderived signifies that the British Library has re-used another organization's catalog record for its cataloging. Headings have not been validated against the relevant authority file. [use only after October 13, 2007]

Term, Name, Title Sources

The following code is for use in subfield $2 in fields 600-657 in Bibliographic and Community Information records, in subfield $f in field 040 (Cataloging Source) and in subfield $2 in 7xx (Linking Entry) fields in Authority records.

Addition:

tesa

Tesauro Agrícola (Beltsville, Maryland; National Agricultural Library) [use only after October 13, 2007]
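To show where these codes actually sit in a record, here is a rough pymarc sketch. It assumes an older pymarc Field constructor that takes a flat subfields list, and the bibliographic content (the deposit number and the subject term) is invented for illustration; only the field/subfield placement follows the announcement above.

```python
# Sketch only: placing the newly approved source codes in MARC fields.
# Record content is invented; field/subfield placement follows the
# announcement. Assumes a pymarc release whose Field takes a flat
# subfields list.
from pymarc import Record, Field

record = Record()

# 017 with $2 rocgpt: a made-up R.O.C. government publication number.
record.add_field(Field(tag="017", indicators=[" ", " "],
                       subfields=["a", "1009601234", "2", "rocgpt"]))

# 042 carrying the ukblderived authentication code.
record.add_field(Field(tag="042", indicators=[" ", " "],
                       subfields=["a", "ukblderived"]))

# 650 with second indicator 7 and $2 tesa (Tesauro Agrícola); the
# heading itself is a placeholder.
record.add_field(Field(tag="650", indicators=[" ", "7"],
                       subfields=["a", "Maíz", "2", "tesa"]))

print(record)
```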

Monday, August 13, 2007

SHAME is a library that leverages editors, presentations and query interfaces for resource-centric RDF metadata. The central idea of SHAME is to work with Annotation Profiles, which encompass:

how the metadata in RDF should be read and modified.

what input is allowed, e.g. multiplicity and vocabularies to use.

presentational aspects like order, grouping, labels etc.

These annotation profiles are then used to generate user interfaces for editing, presentation, or querying purposes. The user interface may be realized in a web setting (both JSP and Velocity versions exist) or in a stand-alone application (a Java/Swing version exists).
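As a rough idea of the kind of resource-centric RDF such a profile would drive an editor for, here is a tiny Python sketch using rdflib and Dublin Core terms. This is my own illustration, not code from SHAME; the resource URI and property values are invented.

```python
# Sketch only: the sort of resource-centric RDF metadata an annotation
# profile might govern (which properties, multiplicity, vocabularies).
# Uses rdflib and Dublin Core terms; not part of SHAME itself.
from rdflib import Graph, Literal, Namespace, URIRef

DCTERMS = Namespace("http://purl.org/dc/terms/")

g = Graph()
resource = URIRef("http://example.org/item/42")  # hypothetical resource

g.add((resource, DCTERMS.title, Literal("An Example Report")))
g.add((resource, DCTERMS.subject, Literal("Cataloging")))

print(g.serialize(format="turtle"))
```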

Tuesday, August 07, 2007

Recently there has been plenty of discussion about the library in AZ using BISAC to arrange its collection. Phoenix Public is also adding BISAC terms to the catalog record. Personally, I don't see how SCI004000 is any easier for a patron than 520 or QB, but it is good to experiment, and they seem to have seen a significant increase in circulation. If you want to see what they are using, the BISAC classification is available online.
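To make the SCI004000 / 520 / QB comparison concrete, here is an invented pymarc record carrying an LC call number, a Dewey number, and a BISAC heading side by side. The specific values, and the use of a 650 with $2 bisacsh, are my assumptions for illustration rather than Phoenix Public's documented practice.

```python
# Sketch only: one made-up astronomy record showing LC (050), Dewey
# (082), and a BISAC-style heading together. Not Phoenix Public
# Library's actual practice.
from pymarc import Record, Field

rec = Record()
rec.add_field(Field(tag="050", indicators=[" ", "4"],
                    subfields=["a", "QB44.3"]))        # LC class number
rec.add_field(Field(tag="082", indicators=["0", "4"],
                    subfields=["a", "520"]))           # Dewey number
# BISAC heading; SCI004000 is the BISAC code for SCIENCE / Astronomy,
# and "bisacsh" is the MARC source code for BISAC Subject Headings.
rec.add_field(Field(tag="650", indicators=[" ", "7"],
                    subfields=["a", "SCIENCE / Astronomy", "2", "bisacsh"]))

print(rec)
```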

Tuesday, July 31, 2007

Looks like I may be presenting on tagging at the TLA District 8 Meeting. If you have any favorite tagging tools, papers or sites please let me know. The meeting will be at Aldine High School Oct 13, 2007 (Sat).

Tuesday, July 24, 2007

Tim Spalding continues to do some interesting work on tagging books. The latest effort is Tagmash, the ability to combine tags in searching.

I've just gone live with a new feature called "tagmash," pages for the intersections of tags. This is a fairly obvious thing to do, but it isn't trivial in context. In getting past words or short phrases, tagmash closes some of the gap between tagging and professional subject classifications.

It is worth reading the entire post to see the thought process that went into creating the feature.
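Under the hood, a tagmash is essentially a set intersection over the works each tag has been applied to. Here is a toy Python sketch of that idea, with invented data and function names; it is my own illustration, not LibraryThing's code.

```python
# Toy sketch of a "tagmash": the set of works carrying *all* of the
# requested tags. Sample data and names are invented for illustration.
from functools import reduce

# tag -> set of work identifiers (made-up sample data)
tag_index = {
    "france": {"w1", "w2", "w5"},
    "wwii": {"w2", "w3", "w5"},
    "memoir": {"w2", "w4"},
}

def tagmash(tags, index):
    """Return the works tagged with every tag in `tags`."""
    sets = [index.get(t, set()) for t in tags]
    return reduce(set.intersection, sets) if sets else set()

print(tagmash(["france", "wwii"], tag_index))            # {'w2', 'w5'}
print(tagmash(["france", "wwii", "memoir"], tag_index))  # {'w2'}
```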

Friday, July 20, 2007

Podcasting is not so new anymore. It seems to me it is past the time when just throwing up an MP3 file is enough. I've heard some pretty poor production that made me just move on to the next selection on my player. So, here are a few tips I've picked up doing a podcast for our library for well over a year.

Noise reduction. Record about 10-12 seconds of room sound as a sample, so it can be removed after the recording is done. If you are recording a live event, such as a conference presentation, record the room before it fills up with people. The air conditioning, computer fans, outside traffic and such add nothing and can be distracting. The sound of folks shuffling papers, coughing, etc. gives it a live feel; don't worry about those. Very long pauses can be shortened.

Sound compression. Compressing the audio evens out the levels, taming segments that were recorded too loud and making everything clearer. Do this after removing any noise.

Volume. Make sure to record at a decent volume level, then make sure the file plays back at a good level. I've downloaded files only to find they are too soft, and turning them up enough to hear brings out the hum in the car's system, so I just skip to the next selection. MP3Trim will adjust smaller files for free. Adjust the volume last.

If you have a project in a library environment, in use or planned, that involves Topic Maps, here is a short survey. They are trying to get a general sense of what, if anything, the library community is doing with this technology.

The latest version of pymarc has the ability to convert records from MARC-8 encoding to Unicode (UTF-8), a task that most of our catalogs will have to go through in the next few years, I guess. Nice to have a tool for when that day arrives.

The pymarc module provides an API for reading, writing and modifying MARC records from python. MARC (MAchine Readable Cataloging) is a metadata format for bibliographic data.

....

While it's not rocket science to read MARC, it's also not something you want to code very often, so pymarc does the lifting for you. pymarc allows you to read records, extract arbitrary fields from each record, update records, and write records back out in transmission format.
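Here is a minimal sketch of the kind of batch conversion the post describes. It assumes a pymarc release whose MARCReader accepts the to_unicode flag; the file names are invented, so check your own version's documentation before relying on it.

```python
# Sketch only: batch-convert a file of MARC-8 records to UTF-8 using
# pymarc. Assumes MARCReader accepts the to_unicode flag; file names
# are invented.
from pymarc import MARCReader

with open("catalog-marc8.mrc", "rb") as infile, \
        open("catalog-utf8.mrc", "wb") as outfile:
    # to_unicode=True decodes MARC-8 field data into Python unicode.
    reader = MARCReader(infile, to_unicode=True)
    for record in reader:
        # Mark the output record as Unicode (leader position 09 = 'a').
        record.leader = record.leader[:9] + "a" + record.leader[10:]
        # Re-serialize the record, with text encoded as UTF-8.
        outfile.write(record.as_marc())
```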

Thursday, July 19, 2007

The open-source, next-generation library catalog browser VuFind has been released. Currently it only works with Voyager; support for other systems is planned, or you could help write the code. Features include:

Search with Faceted Results

Live Record Status and Location with Ajax Querying

"More Like This" Resource Suggestions

Save Resources to Organized Lists

Tagging

Commenting

VuFind is a library resource portal designed and developed for libraries by libraries. The goal of VuFind is to enable your users to search and browse through all of your library's resources, replacing the traditional OPAC and extending its reach to include:

Catalog Records

Digital Library Items

Institutional Repository

Institutional Bibliography

Other Library Collections and Resources

VuFind is completely modular so you can implement just the basic system, or all of the components. And since it's open source, you can modify the modules to best fit your needs, or you can add new modules to extend your resource offerings.

Wednesday, July 18, 2007

LibX is a Firefox extension that provides direct access to your library's resources. LibX is an open-source framework from which editions for specific libraries can be built. Currently, 61 academic and public libraries are offering LibX editions to their users, and an additional 86 libraries are testing editions.

Tuesday, July 17, 2007

The Levels of Adoption document is intended to supplement the Digital Library Federation / Aquifer Implementation Guidelines for Shareable MODS Records, released in November 2006 under the auspices of the DLF Aquifer initiative. The Shareable MODS Guidelines represent a record-centric view of Aquifer's goals, whereas it is often helpful to set priorities for metadata creation with a user- and use-centric view. The newly released Levels of Adoption document describes five general categories of user functionality that are likely to be supported by following specific recommendations from the Guidelines. It attempts to provide additional guidance to MODS implementers in the planning process by documenting what sorts of functionality are possible when certain elements of the Guidelines are followed.

These documents, together with an FAQ for implementation (forthcoming - stay tuned!), were written primarily to assist institutions preparing metadata for aggregation via the DLF Aquifer initiative, but the Working Group expects they could also be useful in preparing metadata for other aggregations, or for using MODS in a local environment. Comments on the Levels of Adoption are welcome, and can be sent to any Working Group member. Contact information for Working Group members is available from the Levels of Adoption page.

Monday, July 16, 2007

The Western Association of Map Libraries (WAML) is looking for folks who want to expand their knowledge of maps and geospatial information through fun-filled networking opportunities and information-packed meetings and journals!

$20 (normally $30 a year) -- good for new members only. Membership is good from now until June 30, 2008, but the offer ends July 31, 2007.

The Western Association of Map Libraries (WAML) is an independent association of map librarians and other people with an interest in maps and map librarianship. Membership in WAML is open to any individual interested in furthering the purpose of the Association which is "to encourage high standards in every phase of the organization and administration of map libraries."

Membership is not limited to people living in the Western US and Canada, but is open to everyone.

BENEFITS:

Subscription to the Information Bulletin (IB)

Discounted registration fees to WAML's bi-annual meetings

Practical workshops on topics such as aerial photos, scanning projects, and map cataloging

Networking regarding geospatial and cartographic information

Participation in WAML's electronic discussion board

INFORMATION BULLETIN: WAML's Information Bulletin is issued three times a year and enjoys worldwide readership. It includes feature articles, photo essays, Association business, book and electronic resource reviews, new map lists, and selected news and notes.

MEETINGS!!! WAML meetings are the most fun-filled library-related events you can attend!! They occur in the spring and fall. They are small (around 50 people), held in great locations such as Fairbanks, Seattle and Boulder, and have great field trips and delicious banquets. The presentations deal only with geospatial topics. Roundtable discussions and workshops take place at every meeting. The registration fee runs from $35 to $60. The accommodations are reasonably priced, the camaraderie is great, and the tone is relaxed. Often, WAML has a "map exchange" where attendees bring their withdrawn and extra copies of maps and make them available for others. We are headed to Denver in October 2007!!

Field trips have taken WAML members to national parks, volcanoes, mountain tops, museums, and vineyards/wineries.

If that weren't enough, you are invited to give presentations at the conferences OR write articles for the Information Bulletin. Presentations and papers run from the very formal to "how I done good." In the past WAML presenters and IB authors have been not just librarians but scholars, novelists, artists, map collectors, map dealers, scientists, and cartographers.

Come join us. The price is right. The offer is limited. Good times, good friends and good maps await you!