RDF Seminar

Submitted by editor on 19 May 1998 - 12:00am

Matthew Dovey reports on the RDF seminar held in the Stakis Hotel, Bath.

On the 8th May, following almost immediately after their MODELS 7 Workshop, UKOLN hosted a half-day seminar entitled “RDF: What is it all about?”. RDF, or Resource Description Framework, is one of the latest TLAs (Three Letter Acronyms) to emerge from the W3C (World Wide Web Consortium), and is of particular pertinence to the library and collection management communities, as one of its intended applications is the interchange of catalogue records and other metadata.

The seminar opened with Renato Iannella, from DSTC Pty Ltd, giving a brief overview of what RDF is and where it came from. RDF is emerging (it is still under development and its form has yet to be finalised) from a number of communities, including those working on the Platform for Internet Content Selection (PICS), Dublin Core, Uniform Resource Characteristics (URC), privacy (P3P) and so forth. It is intended to be an instantiation of the Warwick Framework for metadata description. Its syntax, based on XML, will be instantly recognisable to anyone who has been forced to tackle web page creation in HTML. A typical example might be “<DC:Title>The Future of Metadata</DC:Title>” to represent the title of a particular resource. The tag <DC:Title> is defined by “importing” what is known as an XML namespace: the prefix “DC” is associated with a particular XML definition file, or schema, specified by its location in the form of a URI (Uniform Resource Identifier), for example “<?xml:namespace ns="http://metadata.net/DC/1.0/" prefix="DC" ?>”. In this particular example a schema defining suitable tags for the Dublin Core is downloaded from http://metadata.net/DC/1.0/. A typical RDF file may use more than one such schema, for example if it wishes to use tags additional to those defined in the Dublin Core.
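To make the namespace mechanism concrete, here is a short sketch in Python showing how a prefixed tag such as DC:Title resolves to its namespace URI once parsed. Only the Dublin Core URI and the DC:Title element come from the example above; the RDF wrapper element, its namespace URI, and the use of Python's ElementTree (which post-dates the seminar) are illustrative assumptions.

```python
import xml.etree.ElementTree as ET

# A minimal RDF-style record. The RDF wrapper and its namespace URI are
# assumptions for illustration; the DC namespace and DC:Title element
# follow the example in the text.
record = """<RDF:RDF xmlns:RDF="http://www.w3.org/TR/WD-rdf-syntax#"
                     xmlns:DC="http://metadata.net/DC/1.0/">
  <RDF:Description>
    <DC:Title>The Future of Metadata</DC:Title>
  </RDF:Description>
</RDF:RDF>"""

root = ET.fromstring(record)

# Once parsed, each prefixed tag expands to {namespace-URI}local-name,
# so the prefix "DC" itself carries no meaning -- only the URI it was
# bound to does. Two files using different prefixes for the same URI
# are therefore interchangeable.
title = root.find(".//{http://metadata.net/DC/1.0/}Title")
print(title.text)  # -> The Future of Metadata
```

The point of the sketch is that the schema URI, not the prefix, is what identifies the tag set, which is why more than one schema can be mixed in a single record without clashes.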

Brian Kelly from UKOLN then gave a brief outline of various tools for manipulating and editing RDF. These included editors such as DSTC's Reggie, UKOLN's DC-dot, and PrismEd, which was described in a paper presented at the WWW7 Conference. All of these allow metadata to be saved in a suitable RDF file. Applications of RDF already in use include Mozilla (which can be regarded as the experimental form of Netscape 5.0), which uses the RDF format for managing site maps, bookmark lists and history lists. IBM has been doing work with RDF, such as its Java Central Station, whose robot for searching the web for interesting Java applications uses RDF to describe the data it collects. Both IBM and Microsoft (amongst others) have created Java applications for manipulating RDF. Brian has kindly placed some of the presentations on the web at <URL: http://www.ukoln.ac.uk/web-focus/events/seminars/what-is-rdf-may1998/>.

Andrew Powell from UKOLN and Dan Brickley from the University of Bristol then joined Brian Kelly and Renato Iannella for a panel session chaired by Rachel Heery (from UKOLN). The main discussion centred on what advantages RDF has over the similar standards for metadata interchange that have already emerged (such as MARC, EAD, GRS.1, etc.), and what its relationship with them is. In many ways RDF can be seen as a reinvention of the wheel. Those familiar with the GRS.1 record format in Z39.50 will instantly recognise the basic ideas behind RDF, as they are similar to those behind GRS.1: a hierarchical tree of information partitioned by tagged identifiers, with the meaning and format of the tags defined by importing tag sets (GRS.1) or schemas (RDF). Indeed the overlap with Z39.50 is slightly greater, as the web community sees RDF as the foundation for heterogeneous cross-domain searching across the Internet. There was talk of how to convert existing Z39.50- or MARC-based data to RDF, a prospect that could be distressing, as companies like Microsoft and IBM are more likely to see RDF as the future for this sort of application than something based on Z39.50. As Dan Brickley commented, slightly tongue in cheek, “Z39.50 is regarded as a legacy system, which is a bit distressing if you are currently implementing Z39.50”.

However, as became clear, this does not mean that we should ignore all of the work that has already been done in terms of classification and metadata. RDF merely provides a framework for representing and transferring metadata. Its strength, and possibly its weakness, is in the method it outlines for importing and inheriting different schemas. In a well-constructed inheritance hierarchy, an application may not necessarily need to know the schema implemented in a given file, as it can traverse the hierarchical tree of inherited schemas back to schemas it does understand, and make intelligent guesses based upon those. This is not a new idea – anyone versed in object-oriented architectures will recognise it, and it has been tried before, e.g. in the GRS.1 format. Where RDF may win out over its predecessors, however, is in the enthusiasm of the industry to adopt it.

The strength of RDF will ultimately reside in how well we construct the hierarchy of schemas, and how widely adopted the base schemas in the hierarchy become. RDF does not in itself prevent, say, Microsoft and Netscape adopting different and incompatible schemas for bookmark files; it merely provides an architecture within which they could adopt a compatible format, if they wished. However, it is at this point, namely in building the schemas, that the W3C's interest ends: it regards this as something to be done by the communities involved. So this leaves us with a call to action – if RDF is to be a success in the bibliographic community, we must bring the experience we already have in metadata to bear, to ensure that there are universally accepted base schemas, and that there is a well-defined hierarchy of inheritance. If we can achieve this, we may even provide a model of interoperability for the rest of the computer community.
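The fall-back idea described above – an application walking back up the inheritance chain until it reaches a schema it understands – can be sketched in a few lines of Python. Everything here is invented for illustration: the schema names, the parent links, and the idea of holding the hierarchy in a simple lookup table rather than in the schema files themselves.

```python
# Schemas this hypothetical application understands natively.
KNOWN_SCHEMAS = {"dublin-core/1.0"}

# Hypothetical child -> parent links in a schema inheritance hierarchy.
# In RDF the inheritance would be declared in the schemas themselves;
# a flat dictionary stands in for that here.
PARENT = {
    "my-library-extensions/2.1": "uk-bib-profile/1.3",
    "uk-bib-profile/1.3": "dublin-core/1.0",
}

def nearest_understood(schema):
    """Walk back up the inheritance chain until reaching a schema the
    application understands, or return None if there is no ancestor
    it recognises."""
    while schema is not None:
        if schema in KNOWN_SCHEMAS:
            return schema
        schema = PARENT.get(schema)
    return None

# An application that has never seen "my-library-extensions/2.1" can
# still fall back to treating the record as plain Dublin Core.
print(nearest_understood("my-library-extensions/2.1"))  # -> dublin-core/1.0
```

The sketch also makes the article's caveat visible: the fall-back only works if the base schemas at the root of the hierarchy are universally agreed – two vendors rooting their schemas in different, incompatible bases would each return None to the other.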