This is brief documentation of how I used MarcEdit to import correct URLs from an Excel spreadsheet into a large file of MARC records. The name of the ebook supplier has been changed to protect the innocent. The values below worked for me on the Excel spreadsheet I used.

Problem. The ebook supplier (EBS) supplies MARC records of generally good quality for a package of 600 ebooks. However, the URLs are inconsistent: there are between one and four in each record; several ebook suppliers are represented, not just EBS; and many of the DOIs for EBS (the only URLs that are consistent) do not work. We do have an Excel spreadsheet listing OCLC numbers and valid URLs for all titles.

General plan. Delete all the 856 fields in the MARC file and replace them with those from the spreadsheet. To do this, convert the relevant bits of the spreadsheet to a simple MARC file and merge the two using MarcEdit.

Delete the URLs from the original file
Load/convert the original file as an .mrk file. Use the Tools>Add/Delete Field option to delete all the 856 fields in the original file.
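For reference, an 856 in the .mrk file looks something like this (a made-up example rather than one of the actual URLs):

=856  40$uhttps://doi.org/10.1000/example-ebook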

Convert the spreadsheet to MARC.

In MarcEdit (version 6), select Export Tab Delimited Text.

Choose the spreadsheet for the Source File

Choose a filename for the MARC text (.mrk) file to be created

Specify the name of the sheet for an Excel file (e.g. in my case EBS)

Choose the delimiter that separates the data (in my case I left this alone as Tab. It worked)

Choose options (I left the LDR/008 and character encoding alone as I don’t think they mattered)

Next. The data snapshot shows the columns numbered Field 0 to whatever. I needed columns A (OCLC number) and P (URL), so this meant Fields 0 and 15. Selecting the fields and specifying where they go is done using the Settings section to create Arguments. For this, I needed two arguments, one for each field:
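The mappings amounted to something like this (an illustration from memory rather than a screenshot of the actual dialogue; the MARC field and subfield for the URL are my choice of a typical mapping):

Field0   maps to   001      (the OCLC number)
Field15  maps to   856$u    (the URL)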

Edit the new .mrk
As the OCLC numbers in the original MARC records were in the form “ocn123456789” (rather than simply “123456789”), I needed to do a find for “=001 “ and replace it with “=001 ocn” on the new file, then save it.
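For example (with an invented OCLC number, and spacing as per the find-and-replace strings above), a line like

=001 123456789

becomes

=001 ocn123456789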

Merge

From the Tools menu of MarcEdit, select Merge Records

Choose the .mrk of the original MARC records as the Source File (I don’t know if the .mrc would work too)

Choose the newly created .mrk file as the Merge File

Choose a filename for the newly merged file to be created

Leave Record identifier as 001. If you were searching on the ISBN, presumably the 020 would work, but I haven’t tried it. Other options are 010, 020, 022, 035, and MARC21 (?)

I’ve had a quick look at the last just to get an idea, and I’ve isolated what I think is all the data for one book, chosen at random. The whole block of turtle prefixes from the start of the file is included:

BIBFRAME has worked on modelling works as Works within the BIBFRAME model, similar to the RDA modelling work, itself modelled on the work on the FRBR model of Works and Expressions. A BIBFRAME Work is a creative work, perhaps a FRBR Work, or an RDA FRBR Work but it also expresses a FRBR Expression, and of course an RDA FRBR Expression. A Work may express another Work based on others’ work, not just a FRBR Work or an RDA Work. That also works. FRBR Works or RDA Works expressed as BIBFRAME Works can relate to FRBR Expressions (BIBFRAME Works or RDA Expressions). So, Works are works that can be Works but also Expressions linked to Works that really are Works.

I have come up with two bookmarklets that allow you to search for an author’s works in a library catalogue from the author’s Wikipedia page in one click. A bookmarklet is a browser bookmark that does something with the page you’re looking at rather than just taking you to a web page: see the helpful Firefox guide for more information. The bookmarklets are identical, except that one searches UCL’s Explore (Primo) service, and the other searches COPAC. To try them:

Install one of the bookmarklets by dragging the link to your bookmarks toolbar:

You can rename them to something more snappy if you like. Next, go to a Wikipedia page for an author. The bookmarklets only work on Wikipedia pages with VIAF or LC Authorities links in them, but most major authors should be fine. Some examples to try:

How it works. The bookmarklet itself is only a short snippet of Javascript; all it does is look for any links that might be VIAF or LC Authorities links. It then appends this information as a query string to the URL of a remote PHP script. The PHP script does all the hard work. It first works out the URI for the VIAF entry and has a look at it using ARC2. It looks at the RDF for the authorised LC heading, constructs a search URL for either UCL or COPAC, then redirects there, where its work is done. If there is a problem with the VIAF entry, it tries the LC link, if there is one, in a similar way. If there is nothing, it will fail and offer to go back to the Wikipedia page or forward to the catalogue you wanted to search.
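For a flavour of the bookmarklet half, here is a sketch along the lines described above (not the actual bookmarklet: the PHP script’s address and its parameter names are invented stand-ins):

javascript:(function () {
  // Scan the current page for VIAF or LC Authorities links
  var links = document.getElementsByTagName('a');
  var viaf = '', lc = '';
  for (var i = 0; i < links.length; i++) {
    var href = links[i].href;
    if (href.indexOf('viaf.org/viaf/') > -1) { viaf = href; }
    if (href.indexOf('id.loc.gov/authorities/') > -1) { lc = href; }
  }
  if (viaf || lc) {
    // Hand the links to the remote PHP script as a query string
    // (example.org stands in for wherever the script actually lives)
    location.href = 'http://example.org/authorsearch.php?viaf=' +
      encodeURIComponent(viaf) + '&lc=' + encodeURIComponent(lc);
  } else {
    alert('No VIAF or LC Authorities links found on this page.');
  }
}());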

Why. One of the promises of linked data and BIBFRAME and all the rest of it is that data from different sources can be linked together and work with each other. Since VIAF links were recently added to Wikipedia, I’ve wondered what could be done to take advantage of this in a practical way. The link does mean that from a Wikipedia page and its uncontrolled (or at least only consistent within Wikipedia) names, you can find out the authorised form of an author’s name. Charles Darwin (the famous one) is only called Charles Darwin in the title of his Wikipedia article. Search for that on a library catalogue and you’ll get all his works plus the stuff written by other Charles Darwins. With the VIAF data, we know that he is known in most (or at least a huge number of) English-language catalogues as “Darwin, Charles, 1809-1882”, as opposed to another Charles Darwin, who is “Darwin, Charles, 1758-1778”. Although most catalogues or discovery systems don’t use linked data and non-textual identifiers, the ubiquity and uniqueness of an LC heading does almost perform a similar function (although there are caveats galore).

Many of the caveats are in the way library systems search. Both the examples used are imperfect. The UCL one, as I’ve done it, uses a facet search on top of the bare search, which eliminates some incorrect hits (where “Steve”, “Jones”, and “1944-” appear coincidentally as author elements in a search) but also misses a few hits depending on which field he appears in in the record (this is, I think, a fixable glitch which I intend to get fixed). The COPAC one is an author free-text search, but I’ve tried to remedy some of the potential for false hits by putting all searches in quotes.

Improvements. These are legion, but a few sketched ideas below:

Implement this as a browser extension. This was my original intention, so that someone could be browsing any old Wikipedia page and when they come across one with a VIAF (or other service) link at the bottom, a search link is created at the top of the page for them to click on. This could easily be extended in several ways:

Add subject searches. Should be straightforward, although it would require moar bookmarklets, relying on the PHP script offering options, or a proper browser extension.

Add more catalogues/discovery interfaces. This is again straightforward to add to the PHP script if you can figure out the web API for a search service but is subject to the same caveats as subject searches.

Add more than just VIAF and LC Authorities. There are other links appearing at the bottom of some Wikipedia pages, most notably Worldcat. The bookmarklet itself could easily be adapted to accommodate these, avoiding a further profusion of bookmarklets, as well as providing more backup when services are down (VIAF went down twice while I was testing). Adding services to the PHP script is a matter of knowing the structure of the RDF, which shouldn’t be too painful.

Improve how errors are dealt with and reported, especially so the bookmarklet handles more of them and prevents the PHP script being called unnecessarily.

Feedback. I appreciate this is highly unlikely to set the world on fire, but I would be interested in any feedback or ideas of how it could be developed. Of course, please do let me know if you come across any mistakes or problems: it’s becoming almost traditional for me to get the most crucial link wrong in blog posts.

I have programmed a simple RDF viewer, RDFV (RDF Viewer). Copy and paste the contents of an RDF turtle or n-triples file into the box. The viewer will let you click on an element to highlight it and other instances of the same value, as well as triples with the same subject.

Why? The viewer has three purposes:

1) To make the analysis of RDF files easier. Although turtle/n-triples files (I shall refer only to turtle) are the easiest RDF files to read (certainly compared to RDF/XML, which is impenetrable), they can still be complex, especially when there are lots of blank nodes. In particular, I have been trying to get to grips with BIBFRAME data, which can have things like this:

<http://id.loc.gov/resources/bibs/10342843>
    bf:creator _:bnode2049831104 ;
    bf:subject _:bnode1676317824 ,
        _:bnode942225664 ;
The bnodes to which these refer can be many lines away and it is not trivial to match them up. I tried printing data out and drawing lines between them, but this got lost in secret symbols and incoherent scribbles. I thought there must be a better way. With the viewer, you can click on, for example, “_:bnode1676317824” and it will highlight it in bold and in red text, and do the same for all other occurrences of “_:bnode1676317824”. At the top, the number of instances will be shown.

2) To make demonstrating and training easier. I am trying to keep colleagues up to speed with linked data and BIBFRAME, and it is especially useful, I think, to show people real linked data as much as possible. RDF is, to say the least, a trifle intimidating at first glance, so it is helpful to isolate and highlight sections if possible. As well as highlighting bnodes so you can show how one bit relates to another, the viewer also highlights all the triples with the same subject. So, in the above example, all the triples with the subject “<http://id.loc.gov/resources/bibs/10342843>” will be displayed with a shaded background colour when you click on it.

3) As a way of engaging with RDF, linked data, and BIBFRAME in particular, as well as programming in general. This is my first defence against the person who points out that Tool X already exists to do this and is in fact much better.

How to use it. Simply copy and paste the contents of an RDF turtle file into the box and click Submit. The triples will then be displayed underneath. Click on the data itself to highlight various bits. There are two sets of sample data included, both for the same book (Models for decision by C.M. Berners-Lee):

Technical stuff. The code accepts anything which is turtle or turtle-like in nature. However, there are probably some strange characters it won’t like and some data structures that will fool it. In particular, it tries to shoehorn everything into three main columns: subject, predicate, and object (there are also columns for punctuation and language), so nested triples won’t necessarily look particularly impressive. Moreover, it is only a viewer and it does not understand RDF: it can’t make inferences based on the data and won’t even know, given the prefix statement

@prefix bf: <http://bibframe.org/vocab/> .

that

bf:subject

and

<http://bibframe.org/vocab/subject>

are the same thing.
RDFV will not rewrite or abbreviate the data, with two exceptions: it formats its own white space and, if the punctuation at the end of a line immediately follows some data, it inserts a space before it, more for its own parsing ability than anything else. Please do let me know if you see anything wrong with it.
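To give an idea of both the column shoehorning and why that inserted space helps, here is a deliberately naive sketch of splitting one simple triple into columns (my illustration, not the viewer’s actual code, which also has to cope with prefixes, semicolons, commas, and bnodes):

function naiveTripleSplit(line) {
  // Split on whitespace: first token is the subject, second the predicate,
  // the rest (bar the trailing punctuation) is the object. This only works
  // if the punctuation is space-separated from the data, hence the
  // inserted space mentioned above.
  var parts = line.trim().split(/\s+/);
  return {
    subject: parts[0],
    predicate: parts[1],
    object: parts.slice(2, parts.length - 1).join(" "),
    punctuation: parts[parts.length - 1]   // ".", ";", or ","
  };
}
// e.g. naiveTripleSplit('<http://id.loc.gov/resources/bibs/10342843> bf:title "Sample" .')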

The viewer is written entirely in Javascript.

Improvements. I am planning and/or hoping to make some further improvements, including greater control over the formatting from the page itself and some more examples (especially as BIBFRAME evolves).

I have finally completed a multiple-record MARC Record Viewer. This has been rather long in the making but is essentially a quick and practical tool for looking at and assessing MARC records without having to load them into specialist software like MarcEdit or an LMS. It is much the same as the viewer built for my Codecademy project except that:

It reads multiple records in one file, rather than just one, and provides a count.

It has an input box so the records don’t have to be hard-coded into the script.

It is written in client-side Javascript, so you can view source and see how it works, copy it, and do what you like with it (although I would love to know if you do so). I quite defiantly haven’t used jQuery for this, which would probably have made the whole thing a bit easier; instead it uses proper old skool DOM scripting. It uses a minimal amount of CSS, in two files: a generic one, and one that roughly mimics how MARC records look in an Aleph editing screen. It should be fairly trivial to change this file to suit other purposes.
Thank you to those who have already had a shufti at earlier versions of this, especially on different browsers, and provided feedback! Please do let me know if you have any comments on this, suggestions for improvements, or if you come across errors. I have some ideas for improvements, mainly for making user input easier and offering different formatting of results. I hope to start using jQuery for these too, and perhaps a later conversion of the whole thing would be in order.

For a Dev8d session I did with Owen Stephens in February, I presented data for a single book and followed how it had changed as standards changed, trying above all to explain to non-cataloguers why catalogue records look and work the way they do. At least one person found it useful. I am now drafting an internal session at work on the future of cataloguing and am planning to take a similar approach to briefly explain how we got to AACR2 and MARC21, and where we are heading. I took the example I used at Dev8d and hand-crafted some RDA examples, obtained a raw .mrc MARC21 file, and used the RDF from Worldcat to come up with a linked data example.

I have tried to avoid notes on the examples themselves. However, do note the following: the examples generally use only the same simple set of data elements, basically the bits you might find on a basic catalogue card (no subjects, few notes, etc.); the book is quite old so there is no ISBN anyway. The original index card is from our digitised card catalogue. The linked data example was compiled by copying the RDFa from the Worldcat page for the book; this was then put into this RDFa viewer (suggested by Manu Sporny) to extract the raw RDF/Turtle; I manually hacked this further to replace full URIs with prefixes as much as possible in an attempt to make it more readable (I suspect this is where some errors may have crept in). The example itself is of course a conversion from an AACR2/MARC21 record. C.M. Berners-Lee is Tim’s dad.

Feel free to use this and to point out mistakes. I would particularly welcome anyone spotting anything amiss in the RDA and linked data, where I am sure I have mangled the punctuation in both.

Harvard Citation

Berners-Lee, C.M. (ed.) 1965, Models For Decision: a Conference under the Auspices of the United Kingdom Automation Council Organised by the British Computer Society and the Operational Research Society, English Universities Press, London.

Pre-AACR2 on Index Card

BERNERS-LEE, C.M., [ed.].

Models for decision; a conference under the auspices of the United Kingdom Automation Council organised by the British Computer Society and the Operational Research Society.

London, 1965.

x, 149p. illus. 22cm.

AACR2 on Index Card

Models for decision : a conference under the auspices of the United Kingdom Automation Council organised by the British Computer Society and the Operational Research Society / edited by C.M. Berners-Lee. -- London : English Universities Press, 1965.

At Mashcat on 5 July in Cambridge, I gave an afternoon session on getting computer-readable information from the textual information held in MARC21 300 fields using Javascript and regular expressions. I intended this to be useful for cataloguers who might have done some of Codecademy’s Code Year programme, as well as an exploration of how data is entered into catalogue records, its problems, and potential solutions.

AACR2/MARC (and RDA) records store much quantitative information as text, usually as a number followed by units, e.g. “31 cm.” or “xi, 300 p”. This is not easy for computers to deal with. For instance, a computer programme cannot compare two sizes, e.g. “23 cm.” and “25 cm.”, without first extracting a number from each string (23 and 25) as well as determining the units used (cm). In some cases, units might vary: in AACR2, books below 10 cm. are measured in mm., and non-book materials are often measured in inches (abbreviated to in.). Potential uses for better quantitative data in the 300$c include planning shelving for reclassification and more easily finding books by size or range.
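To illustrate the comparison problem (a Javascript aside, not part of either script described below):

// Comparing the raw strings gives the wrong answer as soon as the
// numbers have different numbers of digits:
"9 cm." > "10 cm."                          // true: the string "9" sorts after "1"
parseFloat("9 cm.") > parseFloat("10 cm.")  // false: 9 < 10, as it should be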

Before the session, I sketched out a possible solution using Javascript and regular expressions to make this conversion for dimensions in the 300$c. I have put up a version of A script to find the size of an item in mm. based on the 300$c, with the addition of an extra row which you can fill in to test your own examples without having to edit the script.

If you do want to look at how it works or try editing it yourself you can view source, copy all the HTML, then paste it into a text editor. Save it, then open the file using a browser to test it. Refresh the browser when you change the file.

It starts with a declaration of an array of examples to be tested: you can alter this with your own if you prefer. text_to_mm is the function that does all the work. It takes in the text from a 300$c, converts fractions (e.g. 4 3/4) to decimals (4.75), finds a number, finds a unit, then performs calculations depending on what the unit is to produce a standard figure in mm. At Mashcat, Owen Stephens managed to plug an adaptation of this script into Blacklight to create an index of book sizes. Using this he could do things like find the most common sizes or the largest book in a collection.
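The script itself is linked above; as a rough sketch of the approach it takes (my reconstruction of the logic just described, not the actual code):

function text_to_mm(text) {
  // Convert simple fractions, e.g. "4 3/4" -> "4.75"
  text = text.replace(/(\d+) (\d+)\/(\d+)/g, function (m, whole, num, den) {
    return String(parseFloat(whole) + parseFloat(num) / parseFloat(den));
  });
  var number = parseFloat(text.match(/[\d.]+/));  // find a number
  var unit = text.match(/mm|cm|in/);              // find a unit
  if (isNaN(number) || !unit) { return null; }    // nothing usable found
  switch (unit[0]) {
    case 'mm': return number;                    // already in mm.
    case 'cm': return number * 10;               // cm. to mm.
    case 'in': return Math.round(number * 25.4); // inches to mm.
  }
}
// text_to_mm("23 cm.") gives 230; text_to_mm("4 3/4 in.") gives 121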

The main focus of my session, however, was on a similar script to figure out how many actual pages there are in a book, given the contents of a 300$a, e.g. “300 p.”, “ix, 350 p.”, or “100 p., [45] leaves of plates” (a page being one side of a sheet of paper; a leaf being a sheet of paper only printed on one side, therefore counting as two pages). I have also published a version of A script to find the absolute no. of pages based on the 300$a, with the similar addition of a row for easy user testing. Potential uses for recording page numbers rather than pagination include planning shelving space, easier-to-understand displays for users, and finding books of specified lengths.

The main function is called text_to_pages. The first thing it does is convert any Roman numerals to Arabic ones. The heavy lifting for this is done by a function by Steven Levithan which does the actual number conversion. However, we still need to identify and extract the Roman numerals from the pagination in order to convert them. This line does the extraction and makes a list of the Roman numerals:

var roman_texts=text.match(/[ivxlc]*[, ]/g);

The session I gave concentrated on regular expressions (a bit like the wildcards you use on library databases, but turned up to eleven), which in all cases here are contained within slashes, and I made a simple introductory guide to regular expressions (.docx). There are many guides to regular expressions on the web too, and useful testers to play with such as this one. The regular expression in the line above can be broken down as follows:

[ivxlc] uses square brackets to look for any one of the characters listed within them.

The following * means to look for any number of these in a row

[, ] any of a comma or a space, again using square brackets. Obviously these characters are not used in Roman numerals, but they are a convenient method of isolating these characters as numerals rather than, say, the “l” in “leaves”, which would also match otherwise.

The next few lines work through the list, replace any instances of [, ] with “” (i.e. nothing) to leave the bare Roman numerals, convert all the numbers in the list using Steven Levithan’s functions, then do the replacements on the pagination given in text:
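Something like the following (a reconstruction from the description above; romanToArabic stands in for Levithan’s conversion function, whose real name I haven’t reproduced here):

for (var i = 0; i < roman_texts.length; i++) {
  var delimiter = roman_texts[i].slice(-1);        // the trailing comma or space
  var roman = roman_texts[i].replace(/[, ]/g, ""); // the bare Roman numeral, e.g. "ix"
  if (roman) {
    var arabic = romanToArabic(roman);             // e.g. "ix" -> 9
    text = text.replace(roman_texts[i], arabic + delimiter);
  }
}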

Like the size script above, the rest of the conversion needs to do two things: find the numbers and find the units. To do this we need to find the sequences involved. While this is easy with something like “24 p.” (number is 24, unit is p) or even “xv leaves” (number is 15, unit is leaves), it becomes troublesome when you get something like “23, 100 p.”: the first number is 23, but there is no unit associated with it, only a comma to signify that it is a sequence at all. The following lines try to get round this problem by looking for sequences where the comma appears to be the unit and then looking ahead to find the next unit. In the “23, 100 p.” example, the script would keep looking forward past the 100 until it gets to the “p”.
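My guess at the shape of those lines (the original code isn’t reproduced here), using the regular expressions broken down below:

// Keep rewriting "number, ... unit" sequences until every number has a
// unit directly attached (assumes a "p" or "leaves" eventually follows,
// as in a well-formed 300$a)
while (text.match(/\d*,/)) {
  text = text.replace(/(\d*),(.*?)(p|leaves)/, "$1 $3 $2");
}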

\d* any number of digits. \d is any digit and * looks for any number of them, followed by

, a comma

So as long as the script finds any sequences of numbers followed by a comma, it will carry on making the replacement underneath it. The replacement line itself looks for

\d* any number of digits again, followed by

, a comma

.*? which is . (any character) followed by * (any number of times). The ? makes sure that the smallest matching group of characters is matched; otherwise the expression would think that the unit corresponding to the number 15 in the pagination “15, 25 p., 50 leaves” is “leaves” rather than “p”.

p|leaves either p or leaves. The pipe means match either what is on the left of it or what is on the right of it. Because this is in a set of round brackets, the pipe only applies there, rather than to the whole expression.

Brackets also capture groups, which is really useful here: the first set of () brackets captures the number of pages and stores it as $1, the second set captures everything between the comma and the end of the units as $2, and the third set captures the units only, either “p” or “leaves”, and stores it as $3. So in the example “15, 25 p., 50 leaves”, $1 is “15”, $2 is “ 25 p”, and $3 is “p”. The replacement puts these back in a different order, i.e. “$1 $3 $2”, which would be “15 p 25 p”.

Now that all the sequences will be in number-unit pairs, we can get on with making a list of them to work through:

// Find sequences
var sequences = text.match(/\d+.*?(,|p|leaves)/g);

This looks for:

\d+ at least one digit

.*? any number of any characters, although not being greedy

(,|p|leaves) any of a comma, “p”, or “leaves”. Obviously, if the while loop above has worked, then the comma isn’t needed, but I’ll confess this is a hangover from a previous version of the script…

The next section goes through each of the sequences found and extracts the number and then the unit:
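Roughly like this (again my reconstruction of the loop rather than the original):

var pages = 0;
for (var i = 0; i < sequences.length; i++) {
  var number = parseFloat(sequences[i].match(/\d+/)); // extract the number
  var unit = sequences[i].match(/(p|leaves)/);        // extract the unit
  if (unit && unit[0] === "leaves") {
    pages += number * 2;  // a leaf counts as two pages
  } else if (unit) {
    pages += number;      // plain pages
  }
}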

The parseFloat converts the digit string to a Javascript number. The regular expression to find the unit is also simple:

(p|leaves) either “p” or “leaves”

If the units are “p”, then the variable pages is incremented by the value of the number found; if “leaves”, then pages is incremented by twice that number.

The programme should cope with the loss of abbreviations in RDA, where “p.” is expanded to “pages”, as the regular expression to find the units will still find the “p” at the beginning, just as it isn’t put off by the full stop after the “p”. It could be expanded to look for other variations, and I will do so if I can:

“S.” for German “Seite” or “Seiten”.

“leaf”, as in “1 leaf of plates”

sequences which start in the middle of larger ones, like journal issues with “xii, p. 546-738”. This one will be the most complicated as it goes against the basic flow of the existing code.

I also haven’t properly tested folded sheets or multiple-volume works. Other improvements are needed in failing more gracefully when the script doesn’t find what it’s expecting: the programme should really test for the existence of the arrays it makes before looping through them, but this would make it harder to understand at a glance or demonstrate on screen, so I didn’t do it.

The scripts are written in Javascript for several reasons: it is the language that Codecademy focusses on for beginners; it requires no specialist environment, server, or even a web connection (you just need a basic text editor and a browser); it is easy to adapt for a web page if you do manage to build something; and it is the language I am most confident working in. It would be fairly easy to port to other languages though, and Owen changed the size script, with some other modifications, to work in Beanscript/Java in Blacklight.

I can’t speak for the attendees, but I learnt a lot, and much was made clearer, from playing around with these scripts and talking to people at Mashcat:

Quite how dependent AACR2 and RDA (and consequently MARC21) are on textual information, even for what appears to be quantitative data.

That even for what appears to be standard number-unit data, there are many complications that make it non-trivial to extract the data:

fractions (not even decimals) in 300$c

differing units: book sizes in mm. or cm. depending on how big the book is; disc sizes in in.; extent in pages or leaves (or volumes or atlases or sheets…)

sequences with implied units, such as those with commas.

there is frequently a lack of clarity, and ambiguity about what is actually being measured:

for books, the dimension recorded is normally height (although this is not explicit from a user’s point of view; sometimes it’s height and width, and for a folded sheet it could be all sorts of things); for a disc, it’s the diameter.

For the 300$a, what’s being recorded is pagination, something entirely different from the number of pages. Although important for things like rare books, how important is complete pagination for most users compared to a robust idea of how large a book is? Amazon provide a number of pages. More importantly, how understandable is pagination? During my demonstration, some of my audience of librarians were left cold by the meanings of square brackets, for example (and square brackets can mean any number of things depending on context). Perhaps there is room for both.

I suppose this latter point is a potential conclusion. Ed Chamberlain asked me what I thought should be done. I don’t know, to be honest. I think, like much of the catalogue record, lots more research is needed to see what users (both human and computer) actually want or need. It should be said that entering pagination is in many ways easier for the cataloguer. However, I do think we need:

quantitative data entered as numbers with clear and standard units. For instance, record all book heights as mm. and convert to cm. for display if needed.

more data elements to make clear exactly what is being recorded. Instead of a generic dimension, we need height, width, depth(?), diameter, etc. Instead of pagination, we could have separate elements for pagination, number of pages, and number of volumes (50 volumes each of 10 pages is not the same as 4 volumes of 1,000 pages each). Obviously, not all of them would be needed for all items.

The research to enable us to choose what to record, why we’re recording it, and for whose benefit would be the best starting point for this as well as many other questions in cataloguing and metadata.

I am not a trained programmer, coding is not part of my job description, and I have little direct access to cataloguing and metadata databases at work outside of normal catalogue editing and talking to the systems team, but I thought it might be worth making the point of how useful programming can be in all sorts of little ways. Of course, the most useful way is in gaining an awareness of how computers work: appreciating why some things might be more tricky than others for the systems team to implement, seeing why MARC21 is a bastard to do anything with (even if editing it in a cataloguing module is not really that bad), and how the new world of FRDABRDF is going to be glued together. However, some more practical examples that I managed to cobble together include:

Customizing Classification Web with Greasemonkey. This is a couple of short scripts using Javascript, which is what the default Codecademy lessons use. Javascript is designed for browsers and is a good language to start with, as you can do something powerful very quickly with a short script or even a couple of lines (think of all the 90s image rollovers). It’s also easy to have a go if you don’t have your own server, or even if you’re confined to your own PC.

Aleph-formatted country and language codes. I wrote a small PHP script to read the XML files for the MARC21 language and country codes and convert them into an up-to-date list of preferred codes in a format that Aleph can read: basically a text file which needs line breaks and spaces in the right places. It is easy to tweak or run again in the event of any minor changes. I don’t have this publicly available anywhere though. PHP is not the most elegant language but is relatively easy to dip into if you ever want to go beyond Javascript and do more fancy things, although it can be harder to get access to a server running PHP.

MARC21 .mrc file viewer. I occasionally need to quickly look at raw .mrc files to assess their quality and to figure out what batch changes we want to make before importing them into our catalogue. This is an attempt to create something that I could copy and paste snippets of .mrc files into for a quick look. It is written in PHP and is still under construction. There are other better tools for doing much the same thing to be honest, but coding this myself has had the advantages of forcing me to see how a MARC21 file is put together and realising how fiddly it can be. Try this with an .mrc which has some large 520 or 505 fields in it (there are some zipped ones here, to pick at random) and watch the indicators mysteriously degrade thereafter. I will get to the bottom of this…
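For anyone curious about that structure: a raw MARC21 record is a 24-character leader, then a directory of 12-byte entries, then the field data. Here is a minimal sketch of unpicking it (in Javascript for illustration, although the viewer itself is PHP; real records need byte-accurate offsets, which is exactly where bugs like my indicator one tend to live):

function parseMarcRecord(record) {
  var leader = record.slice(0, 24);                     // fixed 24-character leader
  var baseAddress = parseInt(leader.slice(12, 17), 10); // where the field data starts
  var directory = record.slice(24, baseAddress - 1);    // 12-byte directory entries
  var fields = [];
  for (var i = 0; i + 12 <= directory.length; i += 12) {
    fields.push({
      tag: directory.slice(i, i + 3),                      // e.g. "245"
      length: parseInt(directory.slice(i + 3, i + 7), 10), // field length in bytes
      start: parseInt(directory.slice(i + 7, i + 12), 10)  // offset from baseAddress
    });
  }
  return { leader: leader, fields: fields };
}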

The following examples are less useful for my own practical purposes but have been invaluable for learning about metadata and cataloguing, in particular RDF/linked data. I was very interested in linked data when I first heard about it; being able to actually try something out with it (even if the results are not mind-blowing), rather than just read about it, has been very useful. Both are written in PHP and further details are available from the links: