Thanks Paul!
The availability of a good-quality, open source Freebase-to-RDF
conversion is a very significant contribution. Effectively it makes
available to the world, with the power of SPARQL, what Google is
increasingly leveraging themselves.
I do wonder why you didn't use standard MapReduce (though I'm sure
this is much faster for smaller loads), but anyway :) it's good as it
is for that very task.
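For readers unfamiliar with the contrast: the memory-efficient streaming style (as opposed to a full MapReduce cluster) can be sketched generically in Python. This is just an illustration of the general idea, not Infovore's actual code; the function and sample data below are made up for the example. Each N-Triples line is a self-contained statement, so a dump can be processed one line at a time with flat memory use:

```python
def count_predicates(lines):
    """Tally predicate usage without loading the whole dump into memory.

    `lines` can be any iterable of N-Triples lines, e.g. an open file,
    so memory use stays constant regardless of dump size.
    """
    counts = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        # N-Triples layout: subject, predicate, object, terminated by " ."
        # Splitting into at most three fields keeps the object intact.
        parts = line.split(None, 2)
        if len(parts) == 3:
            predicate = parts[1]
            counts[predicate] = counts.get(predicate, 0) + 1
    return counts

sample = [
    '<http://example.org/a> <http://example.org/name> "Alice" .',
    '<http://example.org/b> <http://example.org/name> "Bob" .',
    '<http://example.org/a> <http://example.org/knows> <http://example.org/b> .',
]
print(count_predicates(sample))
```

The same loop works unchanged when `lines` is a file handle over a billion-triple dump, which is presumably why a modest machine suffices.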
What is a u.n.a. island? (Or can you explain this better? Is there an
example?)

> * a query-rewriting system that enforces a u.n.a island while allowing
> the use of multiple memorable and foreign keyed names in queries as
> does the MQL query engine
I do hope a community will form to contribute to and maintain this
project; we'll certainly contribute back when we can.
Gio
On Tue, Jan 8, 2013 at 1:37 AM, Paul Houle <ontology2@gmail.com> wrote:
> I'm proud to announce the 1.0 release of Infovore, a complete RDF
> processing system
>
> * a Map/Reduce framework for processing RDF and related data
> * an application that converts a Freebase quad dump into standards-compliant RDF
> * an application which creates consistent subsets of Freebase,
> including compound value types, about subsets of topics that can be
> selected with SPARQL-based rules
> * a query-rewriting system that enforces a u.n.a island while allowing
> the use of multiple memorable and foreign keyed names in queries as
> does the MQL query engine
> * a whole-system test suite that confirms correct operation of the
> above with any triple store supporting the SPARQL protocol
>
> See the documentation here
>
> https://github.com/paulhoule/infovore/wiki
>
> to get started.
>
> The chief advantage of Infovore is that it uses memory-efficient
> streaming processing, making it possible, even easy, to handle
> billion-triple data sets on a computer with as little as 4GB of
> memory.
>
> Future work will likely focus on the validation and processing of the
> official Freebase RDF dump as well as other large RDF data sets.
>