Friday, March 04, 2005

Catalogablog turns 3 on Sat. March 5. The past 3 years have been fun and a learning experience. Almost 1800 postings, so much going on in the world of cataloging. I hope you have enjoyed this long strange trip.

This year’s pre-LPSC Education and Public Outreach workshop explores "Pre-Service Teacher Preparation and the Role of the Earth and Space Science Community," and is being held immediately prior to the start of the 36th Lunar and Planetary Science Conference (LPSC).

The workshop will include presentation of pre-service program structures and models of collaboration in the Earth and space science community, discussion of how Earth and space science content is being - or can be - integrated effectively in pre-service programs, and exploration of possible roles researchers and education specialists can offer to the pre-service community. Speakers include Dr. Tim Slater, Director of the University of Arizona Science and Mathematics Center, Dr. Adriane Dorrington, Director of the NASA/Norfolk State University Pre-Service Teacher Program, and Dr. Lawrence Abraham, Co-Director of UTeach at the University of Texas at Austin.

Who should attend? Earth and space scientists, pre-service faculty, formal and informal educators, and education specialists who are interested in sharing experiences, learning more, and building collaborations.

By identifying, enabling, and leveraging partnerships, the Earth and space science community can help facilitate better preparation of science teachers, and bring the excitement of science directly into the classroom. The workshop, designed to support NASA’s goal to make its science content available to all educators and students, is hosted by the Pre-Service Educators Working Group, part of NASA’s Science Mission Directorate’s Support Network.

The workshop will be held on Sunday, March 13, 2005, from 9:00 a.m. to 4:00 p.m. at the South Shore Harbour Resort and Conference Center in League City, Texas. A light breakfast and lunch will be provided. The workshop is free, but registration is required. Participants can register using an electronic registration form provided on the LPI's Web site. Logistical information, including directions to South Shore Harbour, can be found on the LPSC Web site. For questions or additional information, please contact Stephanie Shipp (shipp@lpi.usra.edu; 281-486-2109).

The primary intention of the POI is as a relatively persistent identifier for resources that are described by metadata 'items' in OAI-compliant repositories. Where this is the case, POIs are not explicitly assigned to resources - a POI exists implicitly because an OAI 'item' associated with the resource is made available in an OAI-compliant repository. However, POIs can be explicitly assigned to resources independently from the use of OAI repositories and the OAI-PMH if desired. As such, the POI can be seen as a possible mechanism for implementing cool URIs.
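Because the POI is derived mechanically from the OAI identifier, the mapping can be sketched in a few lines. This is an illustrative sketch assuming the mapping described in the POI specification (drop the `oai:` scheme, turn the remaining colon into a slash, prepend the `http://purl.org/poi/` base); the example identifier is hypothetical.

```python
def oai_to_poi(oai_identifier: str) -> str:
    # Split "oai:<namespace>:<local-identifier>" into its three parts.
    scheme, namespace, local = oai_identifier.split(":", 2)
    if scheme != "oai":
        raise ValueError("not an OAI identifier: " + oai_identifier)
    # Assumed mapping: drop the scheme, replace the remaining ':' with '/',
    # and prepend the POI base URL.
    return "http://purl.org/poi/" + namespace + "/" + local

print(oai_to_poi("oai:arXiv.org:hep-th/9901001"))
```

The point is that the POI costs nothing extra to mint: any item already exposed via OAI-PMH implicitly has one.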

Wednesday, March 02, 2005

When I noted the revision to the ONIX standard, I asked whether anyone has access to ONIX records or whether they are hidden behind firewalls at publishers' sites. LC has access, but I've not heard from anyone that a publisher is making their ONIX records available to the public.

Here is what LC is doing with the records:

The Library receives some data directly from publishers in the ONIX format. The project creates electronic TOC from that data. Hyper-links are made from the TOC to the catalog record and the reverse, allowing researchers to move to or from the Library's online catalog where they can make additional searches. As in the Digital TOC project, Library of Congress Subject Headings are added to the TOC data, thus enhancing further search options.

This experiment links catalog records to their associated reading group guides on the Web.

An outgrowth of the ONIX TOC initiative is the creation of records that contain publisher's descriptions of books. Based on ONIX encoded materials, file creation and linking is similar to that of the ONIX TOC initiative.

This project makes links from LC catalog records to copies of sample texts from publishers (such things as a first chapter, book jacket illustration, images, etc.) that, with the publisher's permission, have been stored at the Library to ensure long-term availability. By linking the catalog record to these items, the project significantly enhances the information about a book that the Library makes available to a researcher.

LC often receives a number of dust jacket images along with data utilized in the ONIX TOC and ONIX Descriptions projects. As the provision of the dust jacket image further enriches the information about an item for the researcher, BEAT intends to add links for such data through its dust jacket initiative. The project will begin by linking to some 2,300 images currently on hand. As the channels through which the Library receives ONIX data are already established, it is anticipated that this number will grow.

ONIX data often includes information about contributors, and BEAT has undertaken a biographical information initiative that makes this information available to researchers. The information is being linked from the catalog record to data stored on the Web. This will allow Web users to encounter the information, follow it back to the underlying catalog record, and use the consequent access to the LC catalog to identify related items.

Tuesday, March 01, 2005

A new twist in folksonomies is the ability to distinguish between the types of information being tagged, just as we distinguish between a 600 and a 610 tag. At Wists they provide this information:

Advanced tagging: Wists allows you to create groups of tags, called 'themes', in order to manage large numbers of tags better. For example: to bookmark a sushi restaurant in New York you could enter: Restaurant location=ny type=sushi. You can invent as many themes as you like. Multi-word tags: use underscores if your tag names are phrases, e.g. latin_america.

Folks are using themes such as location and stock ticker code. There does not seem to be a search feature at the site.
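Mechanically, a theme is just a key=value pair mixed in with plain tags. A hypothetical parser for a Wists-style tag string might look like this (the function name and the split-on-`=` behavior are my assumptions, not documented Wists internals):

```python
def parse_tags(tag_string: str):
    """Split a Wists-style tag string into plain tags and
    theme=value pairs. Hypothetical sketch; Wists' actual
    parsing rules may differ."""
    tags, themes = [], {}
    for token in tag_string.split():
        if "=" in token:
            theme, value = token.split("=", 1)  # first '=' separates theme from value
            themes[theme] = value
        else:
            tags.append(token)
    return tags, themes

tags, themes = parse_tags("Restaurant location=ny type=sushi")
```

So `Restaurant` stays an ordinary tag while `location` and `type` become themes with values `ny` and `sushi`.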

It will be interesting to see the pull that develops between complex searching requirements and the need to keep things simple. Dublin Core has experienced this division almost from the start, and that was among a much less diverse audience. As people are able to create any theme they wish, the splitting of like information into different themes may prove a problem, or folks may gravitate to commonly used terms. For instance, location, place, geo, city, state, country, and where could all be used to store geographic information. It will be interesting to see what develops. There is a dissertation topic here for someone. Wists was seen on Library Stuff.

Registration for the NASIG 20th Annual Conference begins today, March 1, 2005. The 2005 NASIG Conference will be held in Minneapolis, Minnesota, on the Mississippi River, May 19-22 at the Hilton Minneapolis. Minneapolis and its twin city, St. Paul, are two of the most vibrant metropolitan cities in the Upper Midwest, rich with theaters, art museums, and other cultural and entertainment venues. Come for the varied and intriguing conference program covering hot topics from the serials world today, celebrate NASIG's 20th anniversary, and explore Minneapolis/St. Paul. Visit the Mall of America, the choice destination for shoppers around the world, enjoy the Minneapolis skyline, or take a brisk walk over to the Stone Arch Bridge along the Mississippi riverfront.

Registration for the 2005 NASIG Conference is online only. Hotel reservations may be made online or by calling 888-933-5363, but you must use the group code "NSG" to get rooms at conference rate.

Northwest, the conference's preferred airline, is offering discounts ranging from 3-15%; details and instructions are available on the conference website.

The NASIG Program Planning Committee has spent many hours putting together an excellent program for this year’s conference. You can find the complete Program Guide online.

Monday, February 28, 2005

We began by installing an Apache Web server. Many folks are still using version 1.3 rather than 2.0; one reason is that they would lose the functionality of AxKit, an XML transformation engine.

Next we examined Perl. Very common and useful language. Learning Perl was recommended as an introductory text.

Z39.50. Installed YAZ, a Z39.50 client toolkit from Index Data, and ran some command-line searches against the Library of Congress catalog. It does have a Perl interface to the API. Zebra, also from Index Data, is an indexer and server.
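A yaz-client session against LC looks roughly like this sketch (the host and database are LC's commonly published Z39.50 endpoint; the Bib-1 attribute `@attr 1=4` requests a title search; the query itself is just an example):

```shell
$ yaz-client z3950.loc.gov:7090/Voyager
Z> find @attr 1=4 "learning perl"
Z> show 1
Z> quit
```

`find` runs the search and reports a hit count; `show 1` retrieves and displays the first record, by default in MARC.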

MARC::Record is a Perl module for reading and writing MARC records. One interesting exercise was converting MARC to XHTML. This format makes the records useful to a greater number of tools.
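MARC::Record itself is Perl, but the MARC-to-XHTML idea is easy to illustrate. Here is a minimal Python sketch that renders already-parsed fields, simplified to (tag, value) pairs rather than full MARC structure, as an XHTML definition list; the function name and markup are my own invention:

```python
from xml.sax.saxutils import escape

def fields_to_xhtml(fields):
    """Render (tag, value) pairs -- a simplified stand-in for parsed
    MARC fields -- as an XHTML definition list. Illustrative only."""
    rows = "\n".join(
        f"  <dt>{escape(tag)}</dt><dd>{escape(value)}</dd>"
        for tag, value in fields
    )
    return f'<dl class="marc">\n{rows}\n</dl>'

xhtml = fields_to_xhtml([("100", "Schwartz, Randal L."), ("245", "Learning Perl /")])
print(xhtml)
```

Once the records are XHTML, any Web-aware tool (an indexer like Swish-e, a browser, a stylesheet) can work with them.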

Swish-e. Indexed and searched a group of XHTML files. Swish-e handles metadata, so it is possible to provide fielded searches to users. Comes with a Perl API. Can provide spelling correction, thesaurus intervention, and best bets.
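A Swish-e configuration for fielded searching of XHTML files might look like this sketch (directory, file names, and field choices are examples, not from the workshop):

```
# swish-e.conf -- illustrative only
IndexDir      ./xhtml                # directory of XHTML files to index
IndexOnly     .html .xhtml
MetaNames     title author subject   # enable fielded (meta) searches
PropertyNames title                  # return the title with each hit
```

Indexing would then be `swish-e -c swish-e.conf`, and a fielded search something like `swish-e -f index.swish-e -w 'subject=(cataloging)'`.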

Xsltproc and Xmllint. Xmllint validates XML files. Not very exciting, but a good tool to know about. Xsltproc is exciting. It will take an XML file and transform it into another format: SQL, XHTML, DOC (for Palms), ASCII, or any other plain-text format (AxKit can apply the same transformations on the fly within Apache). An example of the power of this is the Alex Catalog, where a single file is available in multiple formats to suit different users' needs. Xsltproc can generate metadata tags from the text, which can then be indexed by Swish-e.
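The command-line usage of the pair is simple; a sketch (the file names are examples):

```shell
# check the XML first, then transform it with a stylesheet
xmllint --noout --valid records.xml
xsltproc marc2xhtml.xsl records.xml > records.html
```

`--noout` suppresses echoing the document so xmllint only reports problems; swapping in a different stylesheet yields a different output format from the same source file.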

MySQL. A database. phpMyAdmin provides a set of PHP scripts to manage, manipulate, and query the database through a Web browser window. Why don't more librarians know SQL? Databases are so basic to much of our work, it seems a strange lack.
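The SQL in question is not exotic. A minimal taste, using Python's built-in sqlite3 so it runs anywhere (the table and data are invented for illustration; the same statements would work essentially unchanged in MySQL):

```python
import sqlite3

# An in-memory database with a tiny, invented catalog table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE books (title TEXT, author TEXT, year INTEGER)")
con.executemany(
    "INSERT INTO books VALUES (?, ?, ?)",
    [
        ("Learning Perl", "Schwartz", 2001),
        ("MARC for Library Use", "Crawford", 1989),
    ],
)
# The kind of fielded query a catalog database makes routine:
rows = con.execute(
    "SELECT title FROM books WHERE year < 2000 ORDER BY title"
).fetchall()
print(rows)
```

Three statements, CREATE, INSERT, and SELECT, cover a surprising amount of everyday library data work.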

Following the original issue of Release 2.1 in June 2003, and a number of corrections and minor upgrades made in December 2003 as Revision 01, we have made a few further additions, primarily to support the special needs of ONIX implementations in Australia and Canada. These are incorporated into the latest download packages, as Revision 02. Most users will be completely unaffected by these additions, and you do NOT need to change the release number in the header of your ONIX messages. The additional level of revision numbering within the release number is used only for purposes of controlling successive minor revisions: see ONIX for Books -- Product Information Message -- XML Message Specification, Section 5.

A few corrections and improvements have been made in the Release 2.1 documentation in February 2005. These do NOT affect the DTD or Schema, but users should download the revised documentation package and replace their existing copies with the latest versions.

Again, I ask does anybody have access to ONIX records or are the publishers treating them as proprietary data?