XQuery/Pachube feed

You want to create a feed for the Pachube application. Pachube is a platform for storing, sharing and discovering real-time sensor, energy and environment data from objects, devices and buildings around the world, and so provides a platform for sensor data integration. The history gathered by Pachube can be presented in various formats and used by other applications to mash up feeds.

The idea of a feed of the open/closed status of Tower Bridge in London was borrowed from @ni.

A Twitter stream provides the base data for a simple status feed. An XQuery script reads the RSS feed from this stream, deduces the status from the tweet text, and updates an XML file representing the current status.
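The status-deduction step can be sketched as follows. The sketch is in Python rather than the XQuery used on the server, and the feed structure and keyword matching are illustrative, not the actual script:

```python
# Illustrative sketch: deduce an open/closed status from the latest
# tweet title in an RSS feed. Keyword matching is an assumption.
import re
import xml.etree.ElementTree as ET

def deduce_status(rss_text):
    """Return 'open' or 'closed' based on the most recent item title."""
    root = ET.fromstring(rss_text)
    latest = root.find("./channel/item/title").text
    return "open" if re.search(r"\bopen", latest, re.I) else "closed"

rss = """<rss><channel>
<item><title>Tower Bridge is opening for the Gladys</title></item>
</channel></rss>"""
print(deduce_status(rss))  # -> open
```

A real script would also have to handle tweets that mention neither state, in which case the previous status should be retained.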

This XML file has an attached XSLT stylesheet so that when the file is pulled on schedule from the eXist database, it is first transformed server-side into the EEML format required for Pachube feeds. As configured on the UWE server, this uses the Saxon XSLT 2.0 processor.
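The arrangement might look like the following; the filenames and the exact status document structure are illustrative, though the namespace is that of EEML 0.5.1. The status file carries an `xml-stylesheet` processing instruction naming the transform:

```xml
<?xml-stylesheet type="text/xsl" href="status2eeml.xsl"?>
<status updated="2010-05-01T12:00:00Z">closed</status>
```

The transform would produce a minimal EEML document such as:

```xml
<eeml xmlns="http://www.eeml.org/xsd/0.5.1" version="0.5.1">
  <environment>
    <data id="0">
      <current_value>0</current_value>
    </data>
  </environment>
</eeml>
```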

The Pachube interface refreshes automatic feeds every 15 minutes (for the free service). Since a typical bridge lift lasts about 10 minutes, a lift can fall entirely between two polls and be missed. The alternative is to push changes to Pachube as they are detected.

Many amateur weather stations use Weather Display software. This software writes current observations to a space-delimited text file to support interfaces to viewing software, such as the Flash-based Weather Display Live. These text files are generally web-accessible, so any client can read the raw data, although it is polite to ask for access first.
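Reading such a file amounts to splitting on whitespace and picking fields by position. A minimal Python sketch, in which the field positions and stream names are illustrative rather than the definitive Weather Display layout:

```python
# Illustrative sketch: extract named values from a space-delimited
# observations file. The mapping of stream name to field index is
# an assumption, not the documented clientraw layout.
def parse_observations(text, mapping):
    """mapping: {stream_id: field_index}; returns {stream_id: value}."""
    fields = text.split()
    return {sid: float(fields[i]) for sid, i in mapping.items()}

sample = "12345 4.3 7.1 270 14.2 61.0 1013.4"
streams = {"temperature": 4, "humidity": 5, "pressure": 6}
print(parse_observations(sample, streams))
```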

In this Push implementation, a manual feed is defined in Pachube via the API by POSTing a full EEML document. An XML descriptor file defines the mapping between values in the data file and datastreams in the feed. A scheduled XQuery script reads the data file and transforms it via the mapping file to EEML format prior to PUTting it to the Pachube API.
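The push step can be sketched as follows, again in Python for illustration. The feed id, API key and endpoint URL are placeholders, and the EEML skeleton assumes version 0.5.1:

```python
# Illustrative sketch: build a minimal EEML document from stream values
# and PUT it to the feed URL. Endpoint and header are assumptions.
import urllib.request

def to_eeml(values):
    """Render {stream_id: value} as a minimal EEML document string."""
    data = "".join(
        f'<data id="{sid}"><current_value>{v}</current_value></data>'
        for sid, v in values.items())
    return (f'<eeml xmlns="http://www.eeml.org/xsd/0.5.1" version="0.5.1">'
            f'<environment>{data}</environment></eeml>')

def push(feed_id, api_key, values):
    req = urllib.request.Request(
        f"http://api.pachube.com/v2/feeds/{feed_id}.xml",  # illustrative endpoint
        data=to_eeml(values).encode(),
        headers={"X-PachubeApiKey": api_key},
        method="PUT")
    return urllib.request.urlopen(req)

print(to_eeml({"temperature": 14.2}))
```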

If Pachube supported XSLT on the server side, the whole task could be handled by a single XSLT script. For the sake of generalisation, it is helpful to provide an interface which allows parameters to be passed to the script, but this is not strictly necessary.

Similarly, output processing of either the current EEML or a specific datastream's CSV history could be provided with a little code and XSLT. Since this may require authentication, API keys would also have to be stored in this database. Jobs could be generated and scheduled to implement triggers, but this would need a timed pull of the required data.

Code is needed to convert the history feeds provided by Pachube to XML, since these are available only as CSV. Once in XML, XSLT can transform the data to the required format. It would of course be preferable if Pachube provided XML feeds in addition to the CSV feeds.
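The conversion itself is simple: wrap each CSV row (here assumed to be a timestamp,value pair per line) in elements that XSLT can then process. The element names below are arbitrary choices for illustration:

```python
# Illustrative sketch: convert a timestamp,value CSV history into XML.
# Element names ("history", "datapoint", "at", "value") are assumptions.
import csv
import io
import xml.etree.ElementTree as ET

def csv_to_xml(csv_text):
    root = ET.Element("history")
    for ts, value in csv.reader(io.StringIO(csv_text)):
        point = ET.SubElement(root, "datapoint")
        ET.SubElement(point, "at").text = ts
        ET.SubElement(point, "value").text = value
    return ET.tostring(root, encoding="unicode")

print(csv_to_xml("2010-05-01T12:00:00Z,14.2\n2010-05-01T12:15:00Z,14.5\n"))
```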