Each graph describing something contains Freebase URLs to be explored. What we want is the ability to load data into our local store while
some query
is running, enabling the dataset to be enlarged as the query makes choices about
how to proceed.

In SPARQL, the dataset is fixed. That's no good if you want to write a
graph-walking process without some glue in your favourite programming language.
In a way, it's scripting for the web, but of a particular kind: not a sequence
of queries and updates, but changing the collection of graphs, expanding the
RDF dataset known to the application.

Query 1 : See what's in the graph

Let's first look at what's available at the example URL. That does not
require anything special: it's just a FROM clause (which in ARQ will
content-negotiate for RDF; if you use a web browser you will see an HTML page):
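Here is a minimal sketch of such a query; the document URL follows the
Freebase RDF service naming scheme and is illustrative:

    # Load the document into the default graph via FROM, then list its triples
    SELECT *
    FROM <http://rdf.freebase.com/rdf/en.blade_runner>
    WHERE
      { ?subject ?predicate ?object }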

As an experimental feature, consider a new SPARQL keyword, FETCH, which takes a
URL, or a variable bound to a URL by the time that part of the query is
reached, and fetches the graph at that location.
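A sketch of the constant-URL form under this proposed syntax (the variable
form appears in the next query; the URL is again illustrative):

    # Fetch a fixed document into the dataset, then query it by name
    SELECT *
    WHERE
      { FETCH <http://rdf.freebase.com/rdf/en.blade_runner>
        GRAPH <http://rdf.freebase.com/rdf/en.blade_runner>
          { ?s ?p ?o }
      }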

Query 2 : Follow the links

Now we fetch the documents at each of the URLs that are objects of the
(blade runner, film.film.starring) triples.
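A sketch of how that might look; the fb: namespace and the document URL follow
the Freebase RDF service conventions and are assumptions, as is the exact
FETCH syntax:

    PREFIX fb: <http://rdf.freebase.com/ns/>

    SELECT ?performance ?p ?o
    FROM <http://rdf.freebase.com/rdf/en.blade_runner>
    WHERE
      { # Each object of film.film.starring is a URL to dereference
        ?film fb:film.film.starring ?performance .
        # Fetch the document at that URL into a named graph ...
        FETCH ?performance
        # ... and query it under that name
        GRAPH ?performance
          { ?s ?p ?o }
      }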

FETCH loads the graph and places it in the dataset as a named graph, the name
being the URL it was fetched from. We use GRAPH to access the loaded graph.
Done this way, triples from different sources are kept separate, which might
be important in deciding which sources to believe.
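Because each source ends up in its own named graph, a later query can report
where each triple came from. A minimal illustration, assuming the Freebase
film.performance.actor property:

    PREFIX fb: <http://rdf.freebase.com/ns/>

    # ?source is bound to the URL of the graph that asserted the triple
    SELECT ?source ?actor
    WHERE
      { GRAPH ?source
          { ?performance fb:film.performance.actor ?actor }
      }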

This also shows a critical limitation: placing data in a named graph meets a
basic requirement for deciding what to believe, but really there ought to be a
lot more metadata about the graph, including when it was read, possibly why it
was read (how we got here in the query), and so on. But we are not an agent
system, so we will note this and move on.

We are left with a question: why use (extended) SPARQL? If you're doing it once,
then a web browser is easier. After all, I used one to choose the properties to
follow.

But with a query, you can send it to someone else so they can reuse your
knowledge; you can rerun it to look for changes; and you can generalise it,
letting the computer do some brute-force search to find things that would take
you, the human, a long time.