Craig's computer system is doing RDF Web crawling. It has a list of URLs from which it will fetch RDF content. It will parse that content and save the resulting RDF Graphs. It will then make available the information it gathered, and some metadata about how the information was gathered. That information will be obtained and used by Dave's machine.
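The fetch-parse-store-serve cycle above can be sketched in code. This is a minimal illustration, not part of any design: the `CrawlResult` record, its field names, and the stub fetcher are all hypothetical, and a real crawler would use HTTP and an RDF parser where the stub stands in.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical record of one crawl attempt; the field names are
# illustrative, not taken from any specification.
@dataclass
class CrawlResult:
    source_url: str          # URL that was dereferenced
    status: int              # HTTP status code of the response
    retrieved_at: str        # when the fetch happened (ISO 8601, UTC)
    triples: list = field(default_factory=list)  # parsed RDF triples, if any

def crawl(urls, fetch):
    """Dereference each URL with the supplied `fetch` function and
    collect one CrawlResult per URL, successful or not."""
    results = []
    for url in urls:
        status, triples = fetch(url)
        results.append(CrawlResult(
            source_url=url,
            status=status,
            retrieved_at=datetime.now(timezone.utc).isoformat(),
            triples=triples,
        ))
    return results

# Stub standing in for real HTTP fetching plus RDF parsing.
def fake_fetch(url):
    if url.endswith("404"):
        return 404, []
    return 200, [(url + "#s", url + "#p", url + "#o")]

results = crawl(["http://example.org/a", "http://example.org/404"], fake_fetch)
```

The point of keeping a record per attempt, rather than per success, is that the "metadata about how the information was gathered" includes failed dereferences too.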

Some more details:

This morning, Craig's machine was erased — there are no old crawl results.

Today Craig's machine tried to dereference only three URLs, as follows.

If we adopt Graphs Design 6.1, there are still many ways to address this scenario. Each section below presents one of these ways; they all use Design 6.1.

These examples do not include as much metadata as one would probably like. In particular, clients probably SHOULD pay attention to cache management headers like Last-Modified, Expires, ETag, and Cache-Control. Hopefully the examples are detailed enough to show how one could include such header information.
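As a sketch of what recording those headers might look like, the hypothetical helper below pulls the cache-management headers named above out of an HTTP response's header dictionary so they could be stored as crawl metadata; the function name and structure are illustrative only.

```python
# The cache-management headers mentioned in the text.
CACHE_HEADERS = ("Last-Modified", "Expires", "ETag", "Cache-Control")

def cache_metadata(headers):
    """Extract the cache-relevant headers from a response's header dict.
    HTTP header names are case-insensitive, so normalize for lookup."""
    lowered = {k.lower(): v for k, v in headers.items()}
    return {name: lowered[name.lower()]
            for name in CACHE_HEADERS
            if name.lower() in lowered}

meta = cache_metadata({
    "content-type": "text/turtle",
    "etag": '"abc123"',
    "last-modified": "Tue, 15 Nov 1994 12:45:26 GMT",
})
# meta keeps only the ETag and Last-Modified entries here
```

Stored alongside each fetched graph, such a record would let Dave's machine decide whether a cached copy is still fresh before re-fetching.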