Kingsley Idehen's Blog Data Space
http://www.openlinksw.com:443/blog/kidehen@openlinksw.com/blog/
I have seen the future and it's full of Linked Data! :-)
Kingsley Uyi Idehen <kidehen@openlinksw.com>
2018-02-18T05:38:47Z

Data Spaces
http://www.openlinksw.com:443/blog/kidehen@openlinksw.com/blog/?id=1662
2011-03-01

There is increasing coalescence around the idea that HTTP-based Linked Data adds a tangible dimension to the World Wide Web (Web). This Data Dimension grants end-users, power-users, integrators, and developers the ability to experience the Web not solely as an Information Space or Document Space, but now also as a Data Space.

Here is a simple What and Why guide covering the essence of Data Spaces.

What is a Data Space?

A Data Space is a point of presence on a network, where every Data Object (item or entity) is given a Name (e.g., a URI) by which it may be Referenced or Identified.

In a Data Space, every Representation of those Data Objects (i.e., every Object Representation) has an Address (e.g., a URL) from which it may be Retrieved (or "gotten").

In a Data Space, every Object Representation is a time variant (that is, it changes over time), streamable, and format-agnostic Resource.

An Object Representation is simply a Description of that Object. It takes the form of a graph, pictorially constructed from sets of 3 elements which are themselves named Subject, Predicate, and Object (or SPO); or Entity, Attribute, and Value (or EAV). Each Entity+Attribute+Value or Subject+Predicate+Object set (or triple) is one datum, one piece of data, one persisted observation about a given Subject or Entity.
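The triple model above can be sketched in a few lines of Python. This is a hypothetical, minimal illustration (the URIs and attribute names are invented for the example), not anything produced by an RDF library:

```python
# A tiny Object Representation: a set of Subject-Predicate-Object (or
# Entity-Attribute-Value) triples describing one Subject. Each tuple is
# one datum -- one persisted observation about the Subject.
triples = [
    ("http://example.com/id/alice", "name",     "Alice"),
    ("http://example.com/id/alice", "knows",    "http://example.com/id/bob"),
    ("http://example.com/id/alice", "homeTown", "Lagos"),
]

# Grouping the triples by Subject yields a graph-like description:
# each Subject maps to its Attribute/Value observations.
description = {}
for s, p, o in triples:
    description.setdefault(s, []).append((p, o))

print(description["http://example.com/id/alice"])
```

The collection of observations attached to one Subject is exactly the "Object Representation" the paragraph describes.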

The underlying Schema that defines and constrains the construction of Object Representations is based on Logic, specifically First-Order Logic.
Each Object Representation is a collection of persisted observations (Data) about a given Subject, which aid observers in materializing their perception (Information), and ultimately comprehension (Knowledge), of that Subject.

Why are Data Spaces important?

In the real world -- which is networked by nature -- data is heterogeneously (or "differently") shaped and disparately located.

Data has been increasing at an alarming rate since the advent of computing; the interWeb simply provides context that makes this reality more palpable and more exploitable, and in the process virtuously ups the ante through increasingly exponential growth rates.

We can't stop data heterogeneity; it is endemic to the nature of its producers -- humans and/or human-directed machines. What we can do, though, is create a powerful Conceptual-level "bus" or "interface" for data integration, based on Data Description oriented Logic rather than Data Representation oriented Formats. Basically, it's possible for us to use a Common Logic as the basis for expressing and blending SPO- or EAV-based Object Representations in a variety of Formats (or "dialects").
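As a rough illustration of one datum expressed in two different "dialects" -- a Turtle-style rendering and JSON -- here is a hypothetical sketch (the URIs are invented, and the Turtle line is hand-rolled rather than emitted by an RDF library):

```python
import json

# One SPO/EAV observation (names are invented for the example).
subject, predicate, obj = (
    "http://example.com/id/alice",
    "http://example.com/ns/homeTown",
    "Lagos",
)

# Dialect 1: a Turtle-style rendering of the triple.
turtle = f'<{subject}> <{predicate}> "{obj}" .'

# Dialect 2: a JSON rendering of the very same datum.
as_json = json.dumps({"s": subject, "p": predicate, "o": obj})

# Different Formats, same underlying Description -- which is the point:
# the logic-level datum survives the change of representation.
print(turtle)
print(as_json)
```

Blending representations then amounts to parsing each dialect back to the same (Subject, Predicate, Object) datum.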

The roadmap boils down to:

Assigning unambiguous Object Names to:

Every record (or, in table terms, every row);

Every record attribute (or, in table terms, every field or column);

Every record relationship (that is, every relationship between one record and another);

Every record container (e.g., every table or view in a relational database, every named graph, every spreadsheet, every text file, etc.);

Making each Object Name resolve to an Address through which Create, Read, Update, and Delete ("CRUD") operations can be performed against the associated Object Representation graph.
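The naming step of the roadmap can be sketched as follows, assuming a hypothetical relational table and an invented base URI: each record, attribute, and container gets an unambiguous Name, which doubles as an Address for CRUD operations.

```python
# Hypothetical illustration: mint unambiguous Object Names (URIs) for a
# relational table, its columns, and its rows, as the roadmap describes.
BASE = "http://example.com/data"          # assumed Data Space base URI

def table_uri(table):
    return f"{BASE}/{table}"              # Name for the record container

def column_uri(table, column):
    return f"{BASE}/{table}#{column}"     # Name for a record attribute

def row_uri(table, pk):
    return f"{BASE}/{table}/{pk}"         # Name for an individual record

# Each Name doubles as an Address: an HTTP GET against row_uri(...) would
# Read the record's Object Representation, while POST/PUT/DELETE would
# Create, Update, or Delete it (the "CRUD" operations the roadmap calls for).
print(row_uri("customers", 42))
print(column_uri("customers", "name"))
```

The scheme shown is only one possible URI layout; the essential property is that every row, column, and table receives exactly one unambiguous, resolvable Name.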

Kingsley Uyi Idehen <kidehen@openlinksw.com>

DBpedia + BBC (combined) Linked Data Space Installation Guide
http://www.openlinksw.com:443/blog/kidehen@openlinksw.com/blog/?id=1656
2011-02-17

What?

The DBpedia + BBC Combo Linked Dataset is a preconfigured Virtuoso Cluster (4 Virtuoso Cluster Nodes, each comprising one Virtuoso Instance; initial deployment is to a single Cluster Host, but the license may be converted for physically distributed deployment), available via the Amazon EC2 Cloud, preloaded with the following datasets:

DBpedia 3.6
BBC Programmes
BBC Music
BBC Nature
BBC Food Recipes

Why?

The BBC has been publishing Linked Data from its Web Data Space for a number of years. In line with best practices for injecting Linked Data into the World Wide Web (Web), the BBC datasets are interlinked with other datasets such as DBpedia and MusicBrainz.

Typical follow-your-nose exploration using a Web Browser (or even via sophisticated SPARQL query crawls) isn't always practical once you get past the initial euphoria that comes from comprehending the Linked Data concept. As your queries get more complex, the overhead of remote sub-queries increases its impact, until query results take so long to return that you simply give up.

Thus, maximizing the effects of the BBC's efforts requires Linked Data that shares locality in a Web-accessible Data Space — i.e., where all Linked Data sets have been loaded into the same data store or warehouse. This holds true even when leveraging SPARQL-FED style virtualization — there's always a need to localize data as part of any marginally-decent locality-aware cost-optimization algorithm.

This DBpedia + BBC dataset, exposed via a preloaded and preconfigured Virtuoso Cluster, delivers a practical point of presence on the Web for immediate and cost-effective exploitation of Linked Data at the individual and/or service-specific levels.

How?

To work through this guide, you'll need to start with 90 GB of free disk space. (Only 41 GB will be consumed after you delete the installer archives, but starting with 90+ GB ensures enough work space for the installation.)

Install Virtuoso

Download the Virtuoso installer archive(s). You must deploy the Personal or Enterprise Edition; the Open Source Edition does not support Shared-Nothing Cluster Deployment.

Obtain a Virtuoso Cluster license.

Install Virtuoso.

Set key environment variables and start the OpenLink License Manager, using this command (which may vary depending on your shell and install directory):

. /opt/virtuoso/virtuoso-enterprise.sh

Optional: To keep the default single-server configuration file and demo database intact, set the VIRTUOSO_HOME environment variable to a different directory, e.g.,

export VIRTUOSO_HOME=/opt/virtuoso/cluster-home/

Note: You will have to adjust this setting every time you shift between this cluster setup and your single-server setup. Either may be made your environment's default through the virtuoso-enterprise.sh and related scripts.

Set up your cluster by running the mkcluster.sh script. Note that initial deployment of the DBpedia + BBC Combo requires a 4-node cluster, which is the default for this script.

Start the Virtuoso Cluster with this command:

virtuoso-start.sh

Stop the Virtuoso Cluster with this command:

virtuoso-stop.sh

Using the DBpedia + BBC Combo dataset

Navigate to your installation directory.

Download the combo dataset installer script — bbc-dbpedia-install.sh.

For best results, set the downloaded script to fully executable using this command:

chmod 755 bbc-dbpedia-install.sh

Shut down any Virtuoso instances that may be currently running.

Optional: As above, if you have decided to keep the default single-server configuration file and demo database intact, set the VIRTUOSO_HOME environment variable appropriately, e.g.,

export VIRTUOSO_HOME=/opt/virtuoso/cluster-home/

Run the combo dataset installer script with this command:

sh bbc-dbpedia-install.sh

Verify installation

The combo dataset typically deploys to EC2 virtual machines in under 90 minutes; your time will vary depending on your network connection speed, machine speed, and other variables. Once the script completes, perform the following steps:

Verify that the Virtuoso Conductor (HTTP-based Admin UI) is in place via: http://localhost:[port]/conductor

Verify that the Virtuoso SPARQL endpoint is in place via: http://localhost:[port]/sparql

Verify that the Precision Search & Find UI is in place via: http://localhost:[port]/fct

Verify that the Virtuoso-hosted PivotViewer is in place via: http://localhost:[port]/PivotViewer

Related

BBC Linked Data Spaces Presentation
BBC Music Linked Dataset Snapshot -- PivotViewer Page Screenshot
BBC Programmes Linked Dataset Snapshot -- PivotViewer Page Screenshot
BBC Nature Linked Dataset Snapshot -- PivotViewer Page Screenshot
BBC Food Recipes Snapshot -- PivotViewer Page Screenshot
My Del.icio.us bookmark collection re. BBC Linked Data Demos
Amazon EC2 Snapshots for DBpedia 3.6 + BBC combo -- delivers the BBC and DBpedia dataset combo via a mountable Elastic Block Storage (EBS) device usable with an Amazon Machine Image (AMI)
Amazon EC2 Snapshots for DBpedia 3.6 & 3.5
Virtuoso Commercial Edition Download Page
Virtuoso Cluster Edition Guide

Kingsley Uyi Idehen <kidehen@openlinksw.com>

Virtuoso + DBpedia 3.6 Installation Guide (Update 1)
http://www.openlinksw.com:443/blog/kidehen@openlinksw.com/blog/?id=1654
2011-01-25

What is DBpedia?

DBpedia is a community effort to provide a contemporary deductive database derived from Wikipedia content. Project contributions can be partitioned as follows:

Ontology Construction and Maintenance
Dataset Generation via Wikipedia Content Extraction & Transformation
Live Database Maintenance & Administration -- includes actual Linked Data loading and publishing, provision of a SPARQL endpoint, and traditional DBA activity
Internationalization

Why is DBpedia important?

Comprising the nucleus of the Linked Open Data effort, DBpedia also serves as a fulcrum for the burgeoning Web of Linked Data by delivering a dense and highly-interlinked lookup database. In its most basic form, DBpedia is a great source of strong and resolvable identifiers for People, Places, Organizations, Subject Matter, and many other data items of interest. Naturally, it provides a fantastic starting point for comprehending the fundamental concepts underlying TimBL's initial Linked Data meme.

How do I use DBpedia?

Depending on your particular requirements, whether personal or service-specific, DBpedia offers the following:

Datasets that can be loaded into your deductive database (also known as triple or quad store) platform of choice
Live browsable HTML+RDFa based entity description pages
A wide variety of data formats for importing entity description data into a broad range of existing applications and services
A SPARQL endpoint allowing ad-hoc querying over HTTP using the SPARQL query language, and delivering results serialized in a variety of formats
A broad variety of tools covering query by example, faceted browsing, full text search, entity name lookups, etc.

What is the DBpedia 3.6 + Virtuoso Cluster Edition Combo?

OpenLink Software has preloaded the DBpedia 3.6 datasets into a preconfigured Virtuoso Cluster Edition database, and made the package available for easy installation.

Why is the DBpedia+Virtuoso package important?

The DBpedia+Virtuoso package provides a cost-effective option for personal or service-specific incarnations of DBpedia.

For instance, you may have a service that isn't best served by competing with the rest of the world for ad-hoc query time and resources on the live instance, which itself operates under various restrictions that enable this ad-hoc query service to be provided at Web scale.

Now you can easily commission your own instance and quickly exploit DBpedia and Virtuoso's database feature set to the max, powered by your own hardware and network infrastructure.

How do I use the DBpedia+Virtuoso package?

The prerequisites are simply:

A functional Virtuoso Cluster Edition installation
A Virtuoso Cluster Edition license
90 GB of free disk space -- you ultimately only need 43 GB, but this is our recommended free disk space prior to installation completion

To install the Virtuoso Cluster Edition, perform the following steps:

Download the software.

Run the installer.

Set key environment variables and start the OpenLink License Manager, using this command (which may vary depending on your shell):

. /opt/virtuoso/virtuoso-enterprise.sh

Run the mkcluster.sh script, which defaults to a 4-node cluster.

Set the VIRTUOSO_HOME environment variable if you want to keep cluster databases distinct from single-server databases, via a distinct root directory for database files (one that isn't adjacent to single-server database directories).

Start the Virtuoso Cluster Edition instances using this command:

virtuoso-start.sh

Stop the Virtuoso Cluster Edition instances using this command:

virtuoso-stop.sh

To install your personal or service-specific edition of DBpedia, perform the following steps:

Navigate to your installation directory.

Download the installer script (dbpedia-install.sh).

Set execution mode on the script using this command:

chmod 755 dbpedia-install.sh

Shut down any Virtuoso instances that may be currently running.

Set your VIRTUOSO_HOME environment variable, e.g., to the current directory, via this command (which may vary depending on your shell):

export VIRTUOSO_HOME=`pwd`

Run the script using this command:

sh dbpedia-install.sh

Once the installation completes (approximately 1 hour and 30 minutes from start time), perform the following steps:

Verify that the Virtuoso Conductor (HTML-based Admin UI) is in place via: http://localhost:[port]/conductor

Verify that the Precision Search & Find UI is in place via: http://localhost:[port]/fct

Verify that DBpedia's Green Entity Description Pages are in place via: http://localhost:[port]/resource/DBpedia

Related

Amazon EC2 Snapshots for DBpedia 3.6 & 3.5
Virtuoso Commercial Edition Download Page
Virtuoso Cluster Edition Guide
What is the DBpedia Project?

A simple guide usable by any JavaScript developer seeking to exploit SPARQL without hassles.

Why?

SPARQL is a powerful query language, results serialization format, and an HTTP based data access protocol from the W3C. It provides a mechanism for accessing and integrating data across Deductive Database Systems (colloquially referred to as triple or quad stores in Semantic Web and Linked Data circles) -- database systems (or data spaces) that manage proposition oriented records in 3-tuple (triples) or 4-tuple (quads) form.

How?

SPARQL queries are typically delivered as HTTP payloads. Thus, using a RESTful client-server interaction pattern, you can dispatch calls to a SPARQL-compliant data server and receive a payload for local processing.

Steps:

Determine which SPARQL endpoint you want to access, e.g., DBpedia or a local Virtuoso instance (typically http://localhost:8890/sparql).

If using Virtuoso, and you want to populate its quad store using SPARQL, assign "SPARQL_SPONGE" privileges to user "SPARQL" (this is basic access control; more sophisticated WebID-based ACLs are available for controlling SPARQL access).

Output

Place the snippet above into the <script/> section of an HTML document to see the query result.

Conclusion

JSON was chosen over XML (re. output format) since this is a "no-brainer installation and utilization" guide for a JavaScript developer who already knows how to use JavaScript for HTTP-based data access within HTML. SPARQL just provides an added bonus of URL dexterity (delivered via URI abstraction) with regard to constructing Data Source Names or Addresses.

A simple guide usable by any PHP developer seeking to exploit SPARQL without hassles.

Why?

SPARQL is a powerful query language, results serialization format, and an HTTP based data access protocol from the W3C. It provides a mechanism for accessing and integrating data across Deductive Database Systems (colloquially referred to as triple or quad stores in Semantic Web and Linked Data circles) -- database systems (or data spaces) that manage proposition oriented records in 3-tuple (triples) or 4-tuple (quads) form.

How?

SPARQL queries are typically delivered as HTTP payloads. Thus, using a RESTful client-server interaction pattern, you can dispatch calls to a SPARQL-compliant data server and receive a payload for local processing, e.g., local object binding in PHP.

Steps:

From your command line, execute aptitude search '^php' to verify PHP is in place.

Determine which SPARQL endpoint you want to access, e.g., DBpedia or a local Virtuoso instance (typically http://localhost:8890/sparql).

If using Virtuoso, and you want to populate its quad store using SPARQL, assign "SPARQL_SPONGE" privileges to user "SPARQL" (this is basic access control; more sophisticated WebID-based ACLs are available for controlling SPARQL access).

Conclusion

JSON was chosen over XML (re. output format) since this is a "no-brainer installation and utilization" guide for a PHP developer who already knows how to use PHP for HTTP-based data access. SPARQL just provides an added bonus of URL dexterity (delivered via URI abstraction) with regard to constructing Data Source Names or Addresses.

A simple guide usable by any Python developer seeking to exploit SPARQL without hassles.

Why?

SPARQL is a powerful query language, results serialization format, and an HTTP based data access protocol from the W3C. It provides a mechanism for accessing and integrating data across Deductive Database Systems (colloquially referred to as triple or quad stores in Semantic Web and Linked Data circles) -- database systems (or data spaces) that manage proposition oriented records in 3-tuple (triples) or 4-tuple (quads) form.

How?

SPARQL queries are typically delivered as HTTP payloads. Thus, using a RESTful client-server interaction pattern, you can dispatch calls to a SPARQL-compliant data server and receive a payload for local processing, e.g., local object binding in Python.

Steps:

From your command line, execute aptitude search '^python' to verify Python is in place.

Determine which SPARQL endpoint you want to access, e.g., DBpedia or a local Virtuoso instance (typically http://localhost:8890/sparql).

If using Virtuoso, and you want to populate its quad store using SPARQL, assign "SPARQL_SPONGE" privileges to user "SPARQL" (this is basic access control; more sophisticated WebID-based ACLs are available for controlling SPARQL access).
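The steps above can be sketched with only the Python standard library. This is a minimal, hypothetical example: the endpoint and query are placeholders, no request is actually dispatched (urlopen(request) would do that), and the local object binding is demonstrated against an inline sample payload in the standard SPARQL results-JSON shape:

```python
import json
from urllib.parse import urlencode
from urllib.request import Request

# The SPARQL endpoint you chose: DBpedia's public endpoint, or a local
# Virtuoso instance (typically http://localhost:8890/sparql).
endpoint = "http://localhost:8890/sparql"

query = "SELECT ?s WHERE { ?s ?p ?o } LIMIT 2"

# A SPARQL query is just an HTTP payload: encode it into the query string
# and request JSON results. urlopen(request) would dispatch it for real.
request = Request(
    endpoint + "?" + urlencode({
        "query": query,
        "format": "application/sparql-results+json",
    })
)

# Local object binding: a sample SPARQL results payload parsed into
# ordinary Python objects (dicts and lists).
sample_payload = """{
  "head": {"vars": ["s"]},
  "results": {"bindings": [
    {"s": {"type": "uri", "value": "http://dbpedia.org/resource/DBpedia"}}
  ]}
}"""
results = json.loads(sample_payload)
subjects = [b["s"]["value"] for b in results["results"]["bindings"]]
print(subjects)
```

In a live setting you would replace the sample payload with the body returned by urllib.request.urlopen(request).read(); the binding step stays the same.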

Conclusion

JSON was chosen over XML (re. output format) since this is a "no-brainer installation and utilization" guide for a Python developer who already knows how to use Python for HTTP-based data access. SPARQL just provides an added bonus of URL dexterity (delivered via URI abstraction) with regard to constructing Data Source Names or Addresses.

Kingsley Uyi Idehen <kidehen@openlinksw.com>

Rough draft poem: Document, what art thou?
http://www.openlinksw.com:443/blog/kidehen@openlinksw.com/blog/?id=1646
2010-11-11

I am the Data Container, Disseminator, and Canvas.
I came to be when the cognitive skills of mankind deemed oral history inadequate.
I am transcendent, I take many forms, but my core purpose is constant - Container, Disseminator, and Canvas.
I am dexterous, so I can be blank, partitioned horizontally, or horizontally and vertically, and if you get moi excited, I'll show you fractals.
I am accessible in a number of ways, across a plethora of media.
I am loose, so you can access my content too.
I am loose in a cool way, so you can refer to moi independent of my content.
I am cool in a loose way, so you can refer to my content independent of moi.
I am even cool and loose enough to let you figure out stuff from my content, including how it's totally distinct from moi.
But...
I am possessive about my coolness, so all Containment, Dissemination, and Canvas requirements must first call upon moi, wherever I might be.
So...
If you postulate about my demise or irrelevance, across any medium, I will punish you with confusion!
Remember...
I just told you who I am.
Lesson to be learned...
When something tells you what it is, and it is as powerful as I, best you believe it.
BTW -- I am Okay with HTTP response code 200 OK :-)
Kingsley Uyi Idehen <kidehen@openlinksw.com>

7 Things Brought to You by HTTP-based Hypermedia
http://www.openlinksw.com:443/blog/kidehen@openlinksw.com/blog/?id=1644
2010-11-08

There are some very powerful benefits that accrue from the use of HTTP-based Hypermedia. 7 that come to mind immediately include:

Structured & Platform-Independent Enterprise Data Virtualization -- concrete conceptual-level access and provisioning of abstract domain entities such as Customers, Orders, Employees, Products, Countries, Competitors, etc.
Distributed Application State (REST) -- application state transitions via links
Structured Data Representation (Linked Data) -- whole data representation via links
Structured Identity (WebID) -- verifiable distributed identity
Structured Profiles (FOAF) -- platform-independent profiles for people and organizations
Articulation of Structured Value Propositions (GoodRelations) -- Product & Service Offers, Business Entities, Locations, Business Hours, etc.
Structured Collaboration Spaces (SIOC) -- Blogs, Wikis, File Sharing, Discussion Forums, Aggregated Feeds, Statuses, Photo Galleries, Polls, etc.

]]>Kingsley Uyi Idehen <kidehen@openlinksw.com>Virtuoso Linked Data Deployment 3-Stephttp://www.openlinksw.com:443/blog/kidehen@openlinksw.com/blog/?id=1641http://www.openlinksw.com:443/mt-tb/Http/comments?id=1641http://www.openlinksw.com:443/blog/kidehen@openlinksw.com/blog/gems/rsscomment.xml?:id=16412010-10-29T22:54:32ZInjecting Linked Data into the Web has been a major pain point for those who seek personal, service, or organization-specific variants of DBpedia. Basically, the sequence goes something like this: You encounter DBpedia or the LOD Cloud Pictorial. You look around (typically following your nose from link to link). You attempt to publish your own stuff. You get stuck. The problems typically take the following form: Functionality confusion about the complementary Name and Address functionality of a single URI abstraction Terminology confusion due to conflation and over-loading of terms such as Resource, URL, Representation, Document, etc. Inability to find robust tools with which to generate Linked Data from existing data sources such as relational databases, CSV files, XML, Web Services, etc. To start addressing these problems, here is a simple guide for generating and publishing Linked Data using Virtuoso. Step 1 - RDF Data Generation Existing RDF data can be added to the Virtuoso RDF Quad Store via a variety of built-in data loader utilities. Many options allow you to easily and quickly generate RDF data from other data sources: Install the Sponger Bookmarklet for the URIBurner service. Bind this to your own SPARQL-compliant backend RDF database (in this scenario, your local Virtuoso instance), and then Sponge some HTTP-accessible resources. Convert relational DBMS data to RDF using the Virtuoso RDF Views Wizard. 
Starting with CSV files, you can: Place them at an HTTP-accessible location and use the Virtuoso Sponger to convert them to RDF; or Use the CSV import feature to import their content into Virtuoso's relational data engine, then use the built-in RDF Views Wizard as with other RDBMS data. Starting from XML files, you can: Use Virtuoso's inbuilt XSLT processor for manual XML to RDF/XML transformation; or Leverage the Sponger Cartridge for GRDDL, if there is a transformation service associated with your XML data source; or Let the Sponger analyze the XML data source and make a best-effort transformation to RDF. Step 2 - Linked Data Deployment Install the Faceted Browser VAD package (fct_dav.vad), which delivers the following: Faceted Browser Engine UI Dynamic Hypermedia Resource Generator, which delivers descriptor resources for every entity (data object) in the Native or Virtual Quad Stores and supports a broad array of output formats, including HTML+RDFa, RDF/XML, N3/Turtle, NTriples, RDF-JSON, OData+Atom, and OData+JSON. Step 3 - Linked Data Consumption & Exploitation Three simple steps allow you, your enterprise, and your customers to consume and exploit your newly deployed Linked Data -- Load a page like this in your browser: http://<cname>[:<port>]/describe/?uri=<entity-uri> <cname>[:<port>] gets replaced by the host and port of your Virtuoso instance <entity-uri> gets replaced by the URI you want to see described -- for instance, the URI of one of the resources you let the Sponger handle. Follow the links presented in the descriptor page. If you ever see a blank page with a hyperlink subject name in the About: section at the top of the page, simply add the parameter "&sp=1" to the URL in the browser's Address box, and hit [ENTER]. This will result in on-the-fly resource retrieval, transformation, and descriptor page generation. Use the navigator controls to page up and down the data associated with the "in scope" resource descriptor.
Related Sample Descriptor Page (what you see after completing the steps in this post) What is Linked Data, really? Painless Linked Data Generation via URIBurner How To Load RDF Data Into Virtuoso (various methods) Virtuoso Bulk Loader Script for RDF Bulk Loader Script for CSV Wizard-based generation of RDF-based Linked Data from ODBC-accessible Relational DatabasesInjecting Linked Data into the Web has been a major pain point for those who seek personal, service, or organization-specific variants of DBpedia. Basically, the sequence goes something like this:

You encounter DBpedia or the LOD Cloud Pictorial.

You look around (typically following your nose from link to link).

You attempt to publish your own stuff.

You get stuck.

The problems typically take the following form:

Functionality confusion about the complementary Name and Address functionality of a single URI abstraction

Terminology confusion due to conflation and over-loading of terms such as Resource, URL, Representation, Document, etc.

Inability to find robust tools with which to generate Linked Data from existing data sources such as relational databases, CSV files, XML, Web Services, etc.

To start addressing these problems, here is a simple guide for generating and publishing Linked Data using Virtuoso.

Step 1 - RDF Data Generation

Existing RDF data can be added to the Virtuoso RDF Quad Store via a variety of built-in data loader utilities.

Many options allow you to easily and quickly generate RDF data from other data sources:

Install the Sponger Bookmarklet for the URIBurner service. Bind this to your own SPARQL-compliant backend RDF database (in this scenario, your local Virtuoso instance), and then Sponge some HTTP-accessible resources.

Step 3 - Linked Data Consumption & Exploitation

Load a page like this in your browser: http://<cname>[:<port>]/describe/?uri=<entity-uri>

<cname>[:<port>] gets replaced by the host and port of your Virtuoso instance

<entity-uri> gets replaced by the URI you want to see described -- for instance, the URI of one of the resources you let the Sponger handle.

Follow the links presented in the descriptor page.

If you ever see a blank page with a hyperlink subject name in the About: section at the top of the page, simply add the parameter "&sp=1" to the URL in the browser's Address box, and hit [ENTER]. This will result in an "on the fly" resource retrieval, transformation, and descriptor page generation.

Use the navigator controls to page up and down the data associated with the "in scope" resource descriptor.
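The descriptor-page URL pattern above can also be assembled programmatically. A minimal Ruby sketch of the idea; the host, port, and entity URI below are illustrative placeholders, not values taken from this post:

```ruby
require 'uri'

# Build a Virtuoso /describe URL for a given entity URI.
# Passing sp: true appends &sp=1, which forces on-the-fly resource
# retrieval, transformation, and descriptor-page generation.
def describe_url(host, port, entity_uri, sp: false)
  params = { uri: entity_uri }
  params[:sp] = 1 if sp
  "http://#{host}:#{port}/describe/?#{URI.encode_www_form(params)}"
end

url = describe_url('localhost', 8890, 'http://dbpedia.org/resource/Paris', sp: true)
puts url
```

The entity URI is percent-encoded into the `uri` query parameter, so it can itself be a full `http://` identifier without breaking the describe URL.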

A simple guide for any Perl developer seeking to exploit SPARQL without hassle.

Why?

SPARQL is a powerful query language, results-serialization format, and HTTP-based data-access protocol from the W3C. It provides a mechanism for accessing and integrating data across Deductive Database Systems (colloquially referred to as triple or quad stores in Semantic Web and Linked Data circles) -- database systems (or data spaces) that manage proposition-oriented records in 3-tuple (triple) or 4-tuple (quad) form.

How?

SPARQL queries are typically carried as HTTP payloads. Thus, using a RESTful client-server interaction pattern, you can dispatch calls to a SPARQL-compliant data server and receive a payload for local processing.
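The RESTful pattern is the same in any language; here is a sketch in Ruby (a Perl equivalent would use LWP::UserAgent in the same way). The endpoint and query are illustrative; the `format` parameter asking for CSV output is a Virtuoso-style endpoint convention, and an Accept header works similarly:

```ruby
require 'uri'

# A SPARQL query travels as an ordinary HTTP payload: for a GET
# request it rides in the 'query' parameter of the endpoint URL.
def sparql_get_url(endpoint, query, format = 'text/csv')
  "#{endpoint}?#{URI.encode_www_form(query: query, format: format)}"
end

query = 'SELECT DISTINCT ?s WHERE { ?s ?p ?o } LIMIT 10'
url = sparql_get_url('http://dbpedia.org/sparql', query)
# Dispatching is then a single call, e.g.: Net::HTTP.get(URI(url))
puts url
```

Because the whole interaction is just a URL plus a GET, any HTTP-capable client library is already a SPARQL client.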

Steps:

Determine which SPARQL endpoint you want to access e.g. DBpedia or a local Virtuoso instance (typically: http://localhost:8890/sparql).

If using Virtuoso, and you want to populate its quad store using SPARQL, assign "SPARQL_SPONGE" privileges to user "SPARQL" (this is basic access control; more sophisticated WebID-based ACLs are available for controlling SPARQL access).

Conclusion

CSV was chosen over XML as the output format since this is a "no-brainer installation and utilization" guide for a Perl developer who already knows how to use Perl for HTTP-based data access within HTML. SPARQL simply adds URL dexterity (delivered via URI abstraction) when constructing Data Source Names or Addresses.

A simple guide for any Ruby developer seeking to exploit SPARQL without hassle.

Why?

SPARQL is a powerful query language, results-serialization format, and HTTP-based data-access protocol from the W3C. It provides a mechanism for accessing and integrating data across Deductive Database Systems (colloquially referred to as triple or quad stores in Semantic Web and Linked Data circles) -- database systems (or data spaces) that manage proposition-oriented records in 3-tuple (triple) or 4-tuple (quad) form.

How?

SPARQL queries are typically carried as HTTP payloads. Thus, using a RESTful client-server interaction pattern, you can dispatch calls to a SPARQL-compliant data server and receive a payload for local processing, e.g., binding results to local Ruby objects.
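"Local object binding" here simply means mapping the returned rows onto Ruby objects. A sketch using Ruby's standard CSV library against a hard-coded sample payload (the data shown is illustrative, not an actual endpoint response):

```ruby
require 'csv'

# Sample CSV payload, shaped like what a SPARQL endpoint might
# return for SELECT ?name ?birthplace ... (illustrative data).
payload = <<~CSV
  name,birthplace
  Ada Lovelace,London
  Alan Turing,London
CSV

# Bind each result row to a plain Ruby object.
Person = Struct.new(:name, :birthplace)
people = CSV.parse(payload, headers: true).map do |row|
  Person.new(row['name'], row['birthplace'])
end

puts people.first.name
```

With a real endpoint, `payload` would be the body returned by the HTTP GET; everything after that line stays the same.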

Steps:

From your command line, execute aptitude search '^ruby' to verify Ruby is in place.

Determine which SPARQL endpoint you want to access e.g. DBpedia or a local Virtuoso instance (typically: http://localhost:8890/sparql).

If using Virtuoso, and you want to populate its quad store using SPARQL, assign "SPARQL_SPONGE" privileges to user "SPARQL" (this is basic access control; more sophisticated WebID-based ACLs are available for controlling SPARQL access).

Conclusion

CSV was chosen over XML as the output format since this is a "no-brainer installation and utilization" guide for a Ruby developer who already knows how to use Ruby for HTTP-based data access. SPARQL simply adds URL dexterity (delivered via URI abstraction) when constructing Data Source Names or Addresses.

]]>Kingsley Uyi Idehen <kidehen@openlinksw.com>Simple Virtuoso Installation & Utilization Guide for SPARQL Users (Update 5)http://www.openlinksw.com:443/blog/kidehen@openlinksw.com/blog/?id=1647http://www.openlinksw.com:443/mt-tb/Http/comments?id=1647http://www.openlinksw.com:443/blog/kidehen@openlinksw.com/blog/gems/rsscomment.xml?:id=16472011-01-16T07:06:21ZWhat is SPARQL? A declarative query language from the W3C for querying structured propositional data (in the form of 3-tuple [triples] or 4-tuple [quads] records) stored in a deductive database (colloquially referred to as triple or quad stores in Semantic Web and Linked Data parlance). SPARQL is inherently platform independent. Like SQL, the query language and the backend database engine are distinct. Database clients capture SPARQL queries which are then passed on to compliant backend databases. Why is it important? Like SQL for relational databases, it provides a powerful mechanism for accessing and joining data across one or more data partitions (named graphs identified by IRIs). The aforementioned capability also enables the construction of sophisticated Views, Reports (HTML or those produced in native form by desktop productivity tools), and data streams for other services. Unlike SQL, SPARQL includes result serialization formats and an HTTP based wire protocol. Thus, the ubiquity and sophistication of HTTP is integral to SPARQL i.e., client side applications (user agents) only need to be able to perform an HTTP GET against a URL en route to exploiting the power of SPARQL. How do I use it, generally? Locate a SPARQL endpoint (DBpedia, LOD Cloud Cache, Data.Gov, URIBurner, others), or; Install a SPARQL compliant database server (quad or triple store) on your desktop, workgroup server, data center, or cloud (e.g., Amazon EC2 AMI) Start the database server Execute SPARQL Queries via the SPARQL endpoint. How do I use SPARQL with Virtuoso? 
What follows is a very simple guide for using SPARQL against your own instance of Virtuoso: Software Download and Installation Data Loading from Data Sources exposed at Network Addresses (e.g. HTTP URLs) using very simple methods Actual SPARQL query execution via SPARQL endpoint. Installation Steps Download Virtuoso Open Source or Virtuoso Commercial Editions Run the installer (if using the Commercial Edition or the Windows Open Source Edition; otherwise, follow the build guide) Follow the post-installation guide and verify the installation by typing in the command: virtuoso -? (if this fails, check that you've followed the installation and setup steps, then verify that environment variables have been set) Start the Virtuoso server using the command: virtuoso-start.sh Verify you have a connection to the Virtuoso Server via the command: isql localhost (assuming you're using default DB settings) or the command: isql localhost:1112 (assuming the demo database), or go to your browser and type in: http://<virtuoso-server-host-name>:[port]/conductor (e.g. http://localhost:8889/conductor for the default DB or http://localhost:8890/conductor if using the Demo DB) Go to the SPARQL endpoint, which is typically -- http://<virtuoso-server-host-name>:[port]/sparql Run a quick sample query (since the database always has system data in place): select distinct * where {?s ?p ?o} limit 50 . Troubleshooting Ensure environment settings are set and functional -- if using Mac OS X or Windows, so you don't have to worry about this, just start and stop your Virtuoso server using the native OS services applets If using the Open Source Edition, follow the getting started guide -- it covers PATH and startup directory location re. starting and stopping Virtuoso servers. Sponging (HTTP GETs against external Data Sources) within SPARQL queries is disabled by default. You can enable this feature by assigning "SPARQL_SPONGE" privileges to user "SPARQL". Note, more sophisticated security exists via WebID-based ACLs.
Data Loading Steps Identify an RDF-based structured data source of interest -- a file that contains 3-tuples / triples available at an address on a public or private HTTP-based network Determine the Address (URL) of the RDF data source Go to your Virtuoso SPARQL endpoint and type in the following SPARQL query: DEFINE GET:SOFT "replace" SELECT DISTINCT * FROM <RDFDataSourceURL> WHERE {?s ?p ?o} All the triples in the RDF resource (the data source accessed via URL) will be loaded into the Virtuoso Quad Store (using the RDF Data Source URL as the internal quad store Named Graph IRI) as part of the SPARQL query processing pipeline. Note: the data source URL doesn't even have to be RDF based -- which is where the Virtuoso Sponger Middleware comes into play (download and install the VAD installer package first), since it delivers the following features to Virtuoso's SPARQL engine: Transformation of data from non-RDF data sources (file content, hypermedia resources, web services output, etc.) into RDF-based 3-tuples (triples) Cache Invalidation Scheme Construction -- thus, subsequent queries (without the define get:soft "replace" pragma) will not re-retrieve the source except when you forcefully want to override the cache. If you have very large data sources like DBpedia etc. from CKAN, simply use our bulk loader. SPARQL Endpoint Discovery Public SPARQL endpoints are emerging at an ever-increasing rate. Thus, we've set up a DNS lookup service that provides access to a large number of SPARQL endpoints. Of course, this doesn't cover all existing endpoints, so if your endpoint is missing please ping me. Here is a collection of commands for using DNS-SD to discover SPARQL endpoints: dns-sd -B _sparql._tcp sparql.openlinksw.com -- browse for service instances dns-sd -Z _sparql._tcp sparql.openlinksw.com -- output results in Zone File format Related Using HTTP from Ruby -- you can simply construct SPARQL Protocol URLs Using SPARQL Endpoints via Ruby -- Ruby example using the DBpedia endpoint Interactive SPARQL Query By Example (QBE) tool -- provides a graphical user interface (as is common in the SQL realm for query building against RDBMS engines) that works with any SPARQL endpoint Other methods of loading RDF data into Virtuoso Virtuoso Sponger -- architecture and how it turns a wide variety of non-RDF data sources into SPARQL-accessible data Using OpenLink Data Explorer (ODE) to populate Virtuoso -- locate a resource of interest; click on a bookmarklet or use context menus (if using ODE extensions for Firefox, Safari, or Chrome); and you'll have SPARQL-accessible data automatically inserted into your Virtuoso instance. W3C's SPARQLing Data Access Ingenuity -- an older generic SPARQL introduction post Collection of SPARQL Query Examples -- GoodRelations (Product Offers), FOAF (Profiles), SIOC (Data Spaces -- Blogs, Wikis, Bookmarks, Feed Collections, Photo Galleries, Briefcase/DropBox, AddressBook, Calendars, Discussion Forums) Collection of Live SPARQL Queries against LOD Cloud Cache -- simple and advanced queries.

What is SPARQL?

A declarative query language from the W3C for querying structured propositional data (in the form of 3-tuple [triples] or 4-tuple [quads] records) stored in a deductive database (colloquially referred to as triple or quad stores in Semantic Web and Linked Data parlance).

SPARQL is inherently platform independent. Like SQL, the query language and the backend database engine are distinct. Database clients capture SPARQL queries which are then passed on to compliant backend databases.

Why is it important?

Like SQL for relational databases, it provides a powerful mechanism for accessing and joining data across one or more data partitions (named graphs identified by IRIs). The aforementioned capability also enables the construction of sophisticated Views, Reports (HTML or those produced in native form by desktop productivity tools), and data streams for other services.
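For instance, a query that joins data across two named graphs might look like the following (the graph IRIs and properties are illustrative placeholders). The SPARQL text is wrapped in a Ruby heredoc so it can be dispatched with any HTTP client:

```ruby
# A SPARQL query joining data across two named graphs -- data
# partitions identified by IRIs. Graph names and properties here
# are illustrative placeholders, not real vocabularies.
QUERY = <<~SPARQL
  SELECT ?person ?companyName
  WHERE {
    GRAPH <http://example.com/people> {
      ?person <http://example.com/worksFor> ?company
    }
    GRAPH <http://example.com/companies> {
      ?company <http://example.com/name> ?companyName
    }
  }
SPARQL

puts QUERY
```

Each GRAPH clause scopes its triple patterns to one partition, and the shared `?company` variable performs the join across them.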

Unlike SQL, SPARQL includes result serialization formats and an HTTP based wire protocol. Thus, the ubiquity and sophistication of HTTP is integral to SPARQL i.e., client side applications (user agents) only need to be able to perform an HTTP GET against a URL en route to exploiting the power of SPARQL.

Installation Steps

Follow the post-installation guide and verify the installation by typing in the command: virtuoso -? (if this fails, check that you've followed the installation and setup steps, then verify that environment variables have been set)

Start the Virtuoso server using the command: virtuoso-start.sh

Verify you have a connection to the Virtuoso Server via the command: isql localhost (assuming you're using default DB settings) or the command: isql localhost:1112 (assuming the demo database), or go to your browser and type in: http://<virtuoso-server-host-name>:[port]/conductor (e.g. http://localhost:8889/conductor for the default DB or http://localhost:8890/conductor if using the Demo DB)

Go to the SPARQL endpoint, which is typically -- http://<virtuoso-server-host-name>:[port]/sparql

All the triples in the RDF resource (data source accessed via URL) will be loaded into the Virtuoso Quad Store (using RDF Data Source URL as the internal quad store Named Graph IRI) as part of the SPARQL query processing pipeline.
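Putting the pieces together, the load query above can be dispatched through the same SPARQL Protocol URL pattern used for any other query. A Ruby sketch; the endpoint and data-source URL are placeholders:

```ruby
require 'uri'

# The 'define get:soft "replace"' pragma tells Virtuoso to fetch the
# remote RDF resource and load its triples into the quad store, using
# the source URL as the named-graph IRI.
data_source = 'http://example.com/data.rdf'  # placeholder URL
query = <<~SPARQL
  DEFINE GET:SOFT "replace"
  SELECT DISTINCT * FROM <#{data_source}> WHERE { ?s ?p ?o }
SPARQL

endpoint = 'http://localhost:8890/sparql'
url = "#{endpoint}?#{URI.encode_www_form(query: query)}"
# Then e.g.: Net::HTTP.get(URI(url)) executes the load and the SELECT.
puts url
```

Note that the load is a side effect of running the query: the SELECT both triggers the fetch and returns the freshly loaded triples.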

Note: the data source URL doesn't even have to be RDF based -- which is where the Virtuoso Sponger Middleware comes into play (download and install the VAD installer package first) since it delivers the following features to Virtuoso's SPARQL engine:

If you have very large data sources like DBpedia etc. from CKAN, simply use our bulk loader.

SPARQL Endpoint Discovery

Public SPARQL endpoints are emerging at an ever-increasing rate. Thus, we've set up a DNS lookup service that provides access to a large number of SPARQL endpoints. Of course, this doesn't cover all existing endpoints, so if your endpoint is missing please ping me.

Here is a collection of commands for using DNS-SD to discover SPARQL endpoints:

dns-sd -B _sparql._tcp sparql.openlinksw.com -- browse for service instances

dns-sd -Z _sparql._tcp sparql.openlinksw.com -- output results in Zone File format

Related

Virtuoso Sponger -- architecture and how it turns a wide variety of non RDF data sources into SPARQL accessible data

Using OpenLink Data Explorer (ODE) to populate Virtuoso -- locate a resource of interest; click on a bookmarklet or use context menus (if using ODE extensions for Firefox, Safari, or Chrome); and you'll have SPARQL accessible data automatically inserted into your Virtuoso instance.