Tuesday, August 30, 2016

We are pleased to report that the W3C has embraced Memento for versioning its specifications and its wiki. Completing this effort required collaboration between the W3C and the Los Alamos National Laboratory (LANL) Research Library Prototyping Team. Here we provide a brief history of this effort and an overview of the technical work done to bring Memento to the W3C.

When discussions between LANL and the W3C began, the W3C kept their specifications in CVS. The two groups explored how to use Memento with that CVS system and its associated web server software, but this initial attempt ran into permissions issues and other concerns.

By 2014, Yorick Chollet had joined the LANL Prototyping Team. As part of the work with the W3C, Yorick produced standalone TimeGate software that anyone could install and run. The W3C had also started work on a web API for their specifications, so both groups decided to develop the TimeGate as a microservice providing a Memento interface to the W3C API.

In 2015, Herbert notified the W3C that the latest version of the Memento MediaWiki Extension was available. After some planned updates to the W3C infrastructure, the updated extension was installed in January of 2016, restoring Memento support on their wiki.

By that time the W3C specifications API was nearing completion. Harish and Herbert collaborated with José Kahan at the W3C to ensure that the W3C TimeGate microservice worked with the API. Once testing was complete, the W3C added the Memento-Datetime header and updated Link headers to their resources in order to reference the new TimeGate. At the same time the W3C moved services to HTTPS, requiring HTTPS to be implemented at the TimeGate as well. Now both the W3C specifications and the W3C wiki use Memento.

Details of Memento Support for W3C Specifications

Work on Memento for the W3C Specifications entailed coordination between three components:

Software local to the W3C Apache Web Server that serves these specifications - maintained at the W3C

The W3C API, which provides the version history of each specification - maintained at the W3C

The Memento TimeGate microservice, which performs datetime negotiation against that API - maintained at LANL

The diagram below provides an overview of the architecture of the Memento TimeGate microservice. The TimeGate accepts the Accept-Datetime header from Memento clients via HTTP. It then queries the W3C API using an API Handler, and uses the result of that query to discover the best revision of the specification that was active at the datetime expressed in the Accept-Datetime header.

To demonstrate how these components work together, we will walk through Memento datetime negotiation using the specification for HTML 5 at URI-R https://www.w3.org/TR/html5/ and an Accept-Datetime value of Sat, 24 Apr 2010 15:00:00 GMT.

As shown in the curl request below, the W3C Apache Web server produces the appropriate TimeGate Link header for original resources. Memento clients use the timegate relation in this Link header to discover the URI-G of the TimeGate for this resource.
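As a rough illustration of this discovery step, the following Python sketch (assuming the requests library; the exact Link header values depend on the W3C server configuration) issues a HEAD request to the URI-R and extracts the URI-G from the timegate relation:

import requests

URI_R = "https://www.w3.org/TR/html5/"

# HEAD request to the original resource; its Link header advertises the TimeGate.
response = requests.head(URI_R, allow_redirects=True)

# requests parses the Link header into a dictionary keyed by relation type.
timegate_uri = response.links.get("timegate", {}).get("url")
print("URI-G:", timegate_uri)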

The Memento TimeGate microservice extracts the shortname from the original URI, html5 in this case. It then queries the W3C API for this shortname directly, receiving a JSON response like the abridged one below. This response contains the version history of the specification.

From this JSON response, the TimeGate locates the version-history array inside the _embedded object and extracts the uri and date from each entry. It then compares the value of the HTTP request's Accept-Datetime header with these dates to find the URI-M of the best memento, i.e., the version that was active at the requested datetime.
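The selection logic itself is straightforward. The sketch below is a simplification, not the actual microservice code: the API endpoint pattern and the date format in the version history are assumptions, and the real W3C API may require an API key. It queries the API for a shortname, walks the version-history array inside _embedded, and returns the most recent version published on or before the requested datetime.

from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

import requests

# Assumed endpoint pattern; the actual W3C API path and parameters may differ.
API_TEMPLATE = "https://api.w3.org/specifications/{shortname}/versions"

def best_version(shortname, accept_datetime):
    """Return the URI of the version active at accept_datetime (an RFC 1123 string)."""
    target = parsedate_to_datetime(accept_datetime)

    resp = requests.get(API_TEMPLATE.format(shortname=shortname))
    resp.raise_for_status()
    history = resp.json()["_embedded"]["version-history"]

    best = None
    for entry in history:
        # The date format here is assumed to be ISO 8601, e.g. "2010-03-04".
        published = datetime.fromisoformat(entry["date"]).replace(tzinfo=timezone.utc)
        if published <= target and (best is None or published > best[1]):
            best = (entry["uri"], published)

    if best is None:
        # The request predates the specification; fall back to the earliest version.
        return min(history, key=lambda e: e["date"])["uri"]
    return best[0]

print(best_version("html5", "Sat, 24 Apr 2010 15:00:00 GMT"))
# Expected for this example (see below): http://www.w3.org/TR/2010/WD-html5-20100304/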

In the case of our example, the datetime requested is Sat, 24 Apr 2010 15:00:00 GMT. Using the version history from the W3C API, the TimeGate discovers that the URI-M of the best memento that was active at the Accept-Datetime value is http://www.w3.org/TR/2010/WD-html5-20100304/. This URI-M is then used as the value of the Location header of the TimeGate's response. Because the TimeGate has access to the entire version history, it can also generate additional Link relations in its response, filling in the first and last relations as well as the URI of the timemap. The TimeGate's full response is shown below, with the Location and Link headers in bold.

A Memento client would then interpret the HTTP 302 status code as a redirect and make a subsequent request to the URI-M from the Location header. In the response, the W3C Apache Web server provides the Memento-Datetime header, identifying this resource as a memento. Also provided are the timegate and original relations in the Link header, so further datetime negotiation can occur if necessary.
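The client side of this exchange can be sketched in a few lines of Python (using the requests library; the URI-G below is a placeholder, since in practice it is discovered from the timegate relation shown earlier):

import requests

# Placeholder URI-G; a real client obtains this from the timegate Link relation.
URI_G = "https://example.org/timegate/https://www.w3.org/TR/html5/"

headers = {"Accept-Datetime": "Sat, 24 Apr 2010 15:00:00 GMT"}

# First request: the TimeGate answers 302 with the URI-M in the Location header.
negotiation = requests.get(URI_G, headers=headers, allow_redirects=False)
uri_m = negotiation.headers["Location"]

# Second request: fetch the memento itself and confirm its Memento-Datetime.
memento = requests.get(uri_m)
print(memento.headers.get("Memento-Datetime"))
print(memento.headers.get("Link"))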

From this example, we see that datetime negotiation is now possible for W3C specifications, allowing users to find prior versions of any W3C specification using a given datetime. As seen in the datetime negotiation example above and in the link relations diagram below, the relations in the Link header make this possible, even though LANL maintains the TimeGate and the W3C maintains the original resource (current version of the specification) and the mementos (past versions of the specification).

And, of course, TimeMaps work as well, with a TimeMap microservice using the W3C API to find the version history of the page. An example TimeMap is shown below.
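A client does not need to know the TimeMap URI in advance. Assuming a timemap relation is advertised in the Link headers (the TimeGate response above includes the TimeMap URI; whether the original resource also advertises it depends on the W3C server configuration), a sketch like the following can discover and fetch the TimeMap:

import requests

URI_R = "https://www.w3.org/TR/html5/"

# The timemap relation may appear in the Link header of the original resource,
# the TimeGate response, or a memento, depending on configuration.
response = requests.head(URI_R, allow_redirects=True)
timemap_uri = response.links.get("timemap", {}).get("url")

if timemap_uri:
    timemap = requests.get(timemap_uri, headers={"Accept": "application/link-format"})
    print(timemap.text)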

Details of Memento Support on the W3C Wiki

The W3C is also running the full Memento MediaWiki Extension on their wiki. This extension provides TimeGates and TimeMaps, as well as additional information in the Link headers of its HTTP responses. Shown below is an example HTTP response for the original resource https://www.w3.org/wiki/HTML/Elements/link.
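A quick way to reproduce such a response and inspect its Link headers (a sketch using the Python requests library, not part of the extension itself) is:

import requests

URI_R = "https://www.w3.org/wiki/HTML/Elements/link"

# The Memento MediaWiki Extension advertises the TimeGate and TimeMap for this
# wiki page in the Link header of the original resource.
response = requests.head(URI_R, allow_redirects=True)
print(response.headers.get("Link"))

for rel in ("timegate", "timemap"):
    print(rel, "->", response.links.get(rel, {}).get("url"))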

Even though the W3C maintains the Apache server holding mementos and original resources, and LANL maintains the systems running the W3C TimeGate software, it is the relations within the Link headers that tie everything together. It is an excellent example of the harmony possible with meaningful Link headers. Memento allows users to negotiate in time with a single web standard, making web archives, semantic web resources, and now W3C specifications all accessible the same way. Memento provides a standard alternative to a series of implementation-specific approaches.

DocNow has a strong project team and a diverse advisory board, of which I am honored to be a member. The team has been pretty active on GitHub, Slack, Twitter, etc., but those are no substitute for an extended f2f meeting.

Panel #2 featured the personal reflections of activists Reuben Riggs, Kayla Reed, Alexis Templeton, and Rasheen Aldridge, expertly moderated by Jonathan Fenderson. I'm certainly not going to try to summarize their compelling contributions -- you really need to watch the video. One resource I noted was the story of the Palestinian woman giving notes to Ferguson protesters about how to deal with tear gas. I also noted that the activists' use of social media was, at least initially, not entirely focused on Twitter. This has implications because as researchers, we tend to focus on Twitter exclusively, largely because it's the easiest to interact with.

Panel 3 (Yvonne Ng, Stacie Williams, Alexandra Dolan-Mescal, Dexter Thomas) resumed the ethics discussion from the end of Panel 1 (video). Yvonne worked through a set of examples about archivists / reporters including videos (e.g., from YouTube) with PII (see: Ethical Guidelines for Using Video in Human Rights Reporting). The mood in the room at the time was definitely trending toward protecting / anonymizing. I asked how to reconcile this level of editing with the guidance from Panel 2, which included (in so many words) "be sure to document everything, including the ugly". I don't think we really successfully addressed this question. Stacie covered the story of aggregating various #WhatIWasWearing tweets and getting consent from the authors. Dexter echoed the issue of consent, drawing from his experience at the LA Times. Alexandra even went as far as saying "it's a surveillance tool", and questioned the archiving process in general.

I was on Panel 4, along with Brooke Foucault-Welles and Deen Freelon (video). I went last and was so focused on my upcoming presentation that my notes for my co-panelists are uneven. Deen discussed some of his open source tools, and briefly mentioned the problem of disappearing tweets. I did write down Brooke's closing three points: 1) "data storage is cheap, data usability is expensive" (with some stories of her "data wrangling"), 2) the "tradeoff between parsimonious vs. inclusivity", which she summarized nicely as the "stegosaurus problem" -- apparently stegosauruses were relatively rare but preserved well, and 3) "diversifying data", including the context of the larger platform itself and the observation that the Twitter of 2009 is not the same as the Twitter of 2014.

The second day was a half day, and wasn't recorded. Alexandra led us in a User Story Map exercise in an effort to further flesh out user requirements. She had four user types defined (I didn't write them down), but there was discussion about adding a fifth: the "authority" persona that would use the archive to expose and punish the participants.

We concluded the day with Dan Chudnov giving a short demo of the current tool. I won't really go into details since it is likely to change significantly (they were adamant about it being an early discussion piece), but it is far ahead of tools like twarc for supporting guided exploration. 2016-09-01 Edit: a temporary prototype of the tool is now available:

I think the meeting was very successful, and I'm grateful to the organizers (Desiree Jones-Smith, Bergis Jules, et al.) for including me on the Advisory Board and inviting me to St. Louis. I'll add the video links when they're uploaded (2016-09-12 edit: 4 video links added), and in the meantime you can rewind the #docnowcommunity hashtag to get a feel for the many things I missed (Samantha is keeping a list of resources shared over #docnowcommunity).

Note that Dr. Michele Weigle is not teaching this semester. Obviously there is demand for CS 418/518, but if you're considering CS 734/834 you might be interested in this student's quote from a recent exit exam:

[and] Dr. Nelson’s Information Retrieval course are the two which I feel have prepared me most for job interviews and work in the working world of computer science.

We're not yet sure what WS-DL courses will be offered in Spring 2017, so take advantage of these offerings in the Fall.

Monday, August 15, 2016

In a previous post, we discussed a way to use the existing Memento protocol combined with link headers to access unaltered (raw) archived web content. Interest in unaltered content has grown as more use cases arise for web archives.

Ilya Kreymer and David Rosenthal had previously suggested that a new dimension of content negotiation would be necessary to allow clients to access unaltered content. That idea was not originally pursued because it would have required the standardization of new HTTP headers. At the time, none of us were aware of the standard Prefer header from RFC7240. Prefer can solve this problem in an intuitive way, much like their original suggestion of content negotiation.

To recap, most web archives augment mementos when presenting them to the user, often for usability or legal purposes. The figures below show examples of these augmentations.

Figure 2: The UK National Archives adds additional text and a banner to differentiate their mementos from their live counterparts, because their mementos appear in Google search results

Additionally, some archives rewrite links to allow navigation within an archive. This way the end user can visit other pages within the same archive from the same time period. Smaller archives, because of the size of their collections, do not benefit as much from these rewritten links. Of course, for Memento users, these rewritten links are not really required.

The previously proposed solution was based on the use of two TimeGates, one to access augmented content (which is the current default) and an additional one to access unaltered content. In this post, we discuss a less complex method of acquiring raw mementos. This solution provides a standard way to request raw mementos, regardless of web archive software or configuration, and eliminates the need for archive-specific or software-specific heuristics.

The raw-ness of a memento exists in several dimensions, and the level of raw-ness that is required depends on the nature of the application:

No augmented content - The memento should contain no additional HTML, JavaScript, CSS, or text added for usability or any other purpose. Its content should exist as it did on the web at the moment it was captured by the web archive.

No rewritten links - The links should not be rewritten. The links within the memento content should exist as they did on the web at the moment the memento was captured by the web archive.

Original headers - The original HTTP response headers should be available, expressed as X-Archive-Orig-*, like X-Archive-Orig-Content-Type: text/html. Their values should be the same as those of the corresponding headers without the X-Archive-Orig- prefix (e.g. Content-Type) at the moment of capture by the web archive.

We propose a solution that uses the Prefer HTTP request header and the Preference-Applied response header from RFC7240.

Consider a client that prefers a true, raw memento for http://www.cnn.com. Using the Prefer HTTP request header, this client can provide the following request headers when issuing an HTTP HEAD/GET to a memento.

GET /web/20160721152544/http://www.cnn.com/ HTTP/1.1
Host: web.archive.org
Prefer: original-content, original-links, original-headers
Connection: close

As we see above, the client specifies which level of raw-ness it prefers in the memento. In this case, the client prefers a memento with the following features:

original-content - The client prefers that the memento returned contain the same HTML, JavaScript, CSS, and/or text that existed in the original resource at the time of capture.

original-links - The client prefers that the memento returned contain the links that existed in the original resource at the time of capture.

original-headers - The client prefers that the memento response uses X-Archive-Orig-* to express the values of the original HTTP response headers from the moment of capture.

The response also uses the Preference-Applied header to indicate that it provides the original headers and that its content retains the original links and original content. It is possible, of course, for a server to satisfy only some of these preferences, and the Preference-Applied header allows the server to indicate which ones.

The Vary header also contains prefer, indicating that clients can influence the memento's response by using this header. The response can then be cached and correctly reused only for requests that express the same preferences.
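To make the exchange concrete, the sketch below (Python with the requests library) sends the Prefer header to a memento and reports which preferences the server applied; naturally, an archive only honors these tokens if it implements this proposal.

import requests

URI_M = "https://web.archive.org/web/20160721152544/http://www.cnn.com/"

prefer = "original-content, original-links, original-headers"
response = requests.get(URI_M, headers={"Prefer": prefer})

# Which of the requested preferences did the server actually satisfy?
print(response.headers.get("Preference-Applied"))

# Vary should list prefer, so caches keep raw and augmented variants apart.
print(response.headers.get("Vary"))

# Original response headers from the moment of capture, if provided.
for name, value in response.headers.items():
    if name.lower().startswith("x-archive-orig-"):
        print(name + ": " + value)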

The memento returned contains the original content and the original links, as seen in the figure below, and the original headers provided as X-Archive-Orig-* as shown in the above response.

Figure 3: This example shows a memento with original-content (no banner added) and original-links, as seen in the magnified inspector output from Firefox.

If the client issues no Prefer header in the request, then the server can still use the Preference-Applied header to indicate which preferences are met by default. Again, the Vary header indicates that clients can influence the response via the use of the Prefer request header. The Content-Location header indicates the URI-M of the memento. The response headers for such a default memento from the Internet Archive are shown below, with its original headers expressed in the form of X-Archive-Orig-* and bolded for emphasis.

Compared to our previously described approach, this solution is more elegant in its simplicity and intuitiveness. It also allows other client preferences to be introduced over time, should the need emerge; such preferences can and should be registered in accordance with RFC7240. The client specifies which features of a memento it prefers, and the server indicates which of those preferences its response actually satisfies.

We seek feedback on this solution, including what additional dimensions clients may prefer beyond the three we have specified.