Archive for the ‘ODBC’ Category

Online bookie Bet365 has released open-source code on GitHub to encourage enterprise developers to use the Erlang functional programming language.

The company has used Erlang since 2012 to meet the challenge of supporting ever-increasing volumes of web traffic on higher-performance hardware.

“Erlang is a precision tool for developing distributed systems that demand scale, concurrency and resilience. It has been a superb technology choice in a business such as ours that deals in high traffic volumes,” said Chandru Mullaparthi, head of software architecture at Bet365.
…

I checked: the SOAP library is out, and the ODBC library is forthcoming.

Cliff’s post ends with this cryptic sentence:

These releases represent the first phase of a support programme that will aim to address each of the major issues surrounding the uptake of Erlang.

Webnodes AS, a company developing a .NET based semantic content management system, today announced the release of a new product called Webnodes Semantic Integration Server.

Webnodes Semantic Integration Server is a standalone product that has two main components: a SPARQL endpoint for traditional semantic use cases and a full-blown integration server based on the SDShare protocol. SDShare is a new protocol that allows different software systems to share and consume each other's data with a minimal amount of setup.

The integration server ships with out-of-the-box connectors for OData and SPARQL endpoints and for any ODBC-compatible RDBMS. This means you can integrate many of the software systems on the market with very little work. If you need to support software not covered by the available connectors, you can create custom ones. In addition to full-blown connectors, the integration server can also push the raw data to another SPARQL endpoint (the internal data format is RDF) or to an HTTP endpoint (for example, Apache SOLR).
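To make the connector idea concrete: the core job of an RDBMS-to-RDF connector is to map relational rows to triples before pushing them to a SPARQL endpoint. This is a rough, hypothetical sketch of that mapping, not Webnodes' implementation; the namespace and function names are invented for illustration.

```python
# Hypothetical sketch of the relational-to-RDF mapping a connector performs.
# BASE and row_to_triples are illustrative names, not part of any product API.

BASE = "http://example.org/"  # assumed namespace for generated URIs


def row_to_triples(table, key, row):
    """Map one relational row (a dict of column -> value) to
    (subject, predicate, object) triples.

    The primary-key column identifies the subject URI; every other
    column becomes one predicate/object pair about that subject."""
    subject = f"{BASE}{table}/{row[key]}"
    return [
        (subject, f"{BASE}{table}#{column}", value)
        for column, value in row.items()
        if column != key
    ]


# One row from a hypothetical "products" table:
row = {"id": 42, "name": "Widget", "price": 9.99}
triples = row_to_triples("products", "id", row)
for t in triples:
    print(t)
```

In a real connector the rows would arrive via an ODBC query and the triples would be serialized (e.g. as RDF/XML or Turtle) and pushed to the target endpoint; the sketch only shows the structural mapping, which is the easy part, as the commentary below notes.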

I wonder about the line:

This means you can integrate many of the software systems on the market with very little work.

I think wiring disparate systems together is a better description. To “integrate” systems implies some useful result.

Wiring systems together is a long way from the hard task of semantic mapping, which is what actually produces integration of systems.

Rather than asking the usual questions (how to make this faster, how to add more storage, and so on, all of which are important), ask the more difficult questions:

In or between which of these elements would human analysis/judgment have the greatest impact?

Would human analysis/judgment be best made by experts or crowds?

What sort of interface would elicit the best human analysis/judgment? (visual/aural; contest/game/virtual)

Should performance rely on feedback or on homeostasis mechanisms?

That is a very crude and uninformed starter set of questions.

Putting higher-speed access to more data, with better tools, at our fingertips expands the questions we can ask of interfaces and of our interaction with the data. (Before we ever ask questions of the data itself.)