Context is everything

Why is it so hard to share information between government databases? One theory gaining ground in federal interoperability circles is that the language used to translate the information is missing a critical element: verbs.

Database designers, in their relentless pursuit of efficiency, allotted precisely defined individual data fields to contain all the data. In large part, they jettisoned any information about how those fields related to one another and to the agency that kept the information in the first place. In other words, they ditched the verbs that tied the data elements together.

And they will need those verbs to share the information within their databases, especially if they want to share data on an ad hoc basis, said Lucian Russell, who heads Expert Reasoning and Decisions.

Russell spoke at a conference on information sharing held earlier this month by the CIO Council's Semantic Interoperability Community of Practice (SiCOP).

Agencies want to share information with other agencies and, more important, avoid the drudgery of setting up individual point-to-point connections to do so.

This is where semantic interoperability comes in. Researchers such as Russell are developing tools and techniques that consider the meaning of data as a key to how it could be reused. If the data could somehow describe itself in a machine-readable way, it could be reused without human intervention.

'We're looking at a change in the way computing gets done, which is less information-centric and more knowledge-centric,' said Mills Davis, founder of consulting firm Project10x and author of a recent report on such semantic technologies, 'Semantic Wave 2008' (GCN.com/965).

'We've always put logic into programming. But now, the representation of what we think we know, and the rules about reasoning about it, is being put into data structures, so a lot of different programs can actually access and play with these things,' Davis said.

Although the idea of semantic interoperability might seem abstract, the February SiCOP meeting showed a few examples of how it can work and even how federal agencies are putting the idea into use.

Russell described the traditional way that database creation has been taught: The designer isolates the information that needs to be captured, then creates a schema that captures all the relationships among the data elements.

But once the database is created, the schema typically is not included with the data.

The downside is that someone trying to understand the database later has no way of deciphering how all the elements relate to one another.

'What is thrown out in the creation of a database schema is how a lot of this data is related,' Russell said.

Such knowledge can also be considered the context of the data, said Steve Ray, chief of the Manufacturing Systems Integration Division at the National Institute of Standards and Technology, during a question-and-answer session at the conference. 'Context is nothing more than a collection of all the unstated assumptions you have. If you manage to get them all down, you're then context-free. You can then start talking about semantic interoperability.'

Russell advocated the development of data descriptors, which would define the relationship structure of databases, using the verbs database creators omitted.

'All this information has been destroyed,' he said. 'You're going to have to recreate it.'
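A minimal sketch of what such a descriptor might look like, in Python. The format, field names and relationships below are assumptions for illustration, not an actual proposal from Russell's talk; the point is that the exported row alone says nothing about how its fields relate, while the descriptor restores the verbs.

```python
# A database row, exported as bare fields: the relationships are gone.
row = {"emp_id": 104, "dept_id": 7, "mgr_id": 22}

# At design time, the schema knew the "verbs" tying these fields together.
# A data descriptor (hypothetical format) could carry them along with the data:
descriptor = {
    "emp_id":  {"verb": "identifies", "object": "an employee"},
    "dept_id": {"verb": "works in",   "object": "a department"},
    "mgr_id":  {"verb": "reports to", "object": "another employee"},
}

def describe(row, descriptor):
    """Render each field together with the relationship its schema implied."""
    return [f"{field}={value}: {descriptor[field]['verb']} "
            f"{descriptor[field]['object']}"
            for field, value in row.items()]

for line in describe(row, descriptor):
    print(line)
```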

With descriptors in place, work can then be done to make databases start to recognize one another's data. Selmer Bringsjord, a professor at Rensselaer Polytechnic Institute, presented work on what he called 'the database schema mismatch problem.'

Stock example

Bringsjord gave an example: Three databases track the prices of stocks at the close of the market each day. One database may have one entry for every stock on each day. A second database may have one entry for every day, with each entry being the closing price of every stock. A third database has a table for each stock, and each entry contains the closing price and date for that stock.

Although there is a relationship among these databases, he said, it would not be accessible through a single query written in the Structured Query Language (SQL), the standard language for relational databases.
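The three layouts can be sketched as Python structures. The stock names and prices below are made up for illustration; the point is that the same facts take three incompatible shapes, and a per-layout mapping onto one common shape is what lets a single query span them.

```python
# Three layouts of the same closing-price data (per Bringsjord's example).
db1 = [("ACME", "2008-02-01", 31.5),
       ("ZORG", "2008-02-01", 12.0)]                    # one entry per stock per day

db2 = {"2008-02-01": {"ACME": 31.5, "ZORG": 12.0}}      # one entry per day

db3 = {"ACME": [("2008-02-01", 31.5)],
       "ZORG": [("2008-02-01", 12.0)]}                  # one table per stock

# Map each layout onto a common (stock, day, price) shape.
def closes_db1(db):
    for stock, day, price in db:
        yield (stock, day, price)

def closes_db2(db):
    for day, prices in db.items():
        for stock, price in prices.items():
            yield (stock, day, price)

def closes_db3(db):
    for stock, entries in db.items():
        for day, price in entries:
            yield (stock, day, price)

# Once mapped, one query can span all three databases.
assert set(closes_db1(db1)) == set(closes_db2(db2)) == set(closes_db3(db3))
```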

One approach to querying multiple databases that researchers have formulated would be to use the Interoperable Database Language (IDL) to build a union of the databases, which each database could then use to query the other two. IDL could map where each data element belongs in any of the databases.

However, Bringsjord took a different approach, which 'describe[s] what kind of information each system is tracking [and] how that information is structured.' Although this approach might not facilitate the ability to move all the information of one system into another, it does allow one system to query another, even if the datasets of the two systems are incompatible, Bringsjord said.

His approach is to devise bridging axioms that extend the database schemas until they arrive at a common language. If one database tracks maternal relationships, such as Person A is the mother of Person B, an additional axiom could chart a new entity, grandmother, which would be a compound of the mother element.

A mother of another mother would be a grandmother.

'The goal is to travel from one ontology to another one, where at each step we have a bridging axiom,' Bringsjord said.

Using Slate, a tool that helps in the analytical process (GCN.com/964), Bringsjord described how an intelligence analyst could ask whether someone participated in a terrorism-related event, such as a bombing. Although the analyst has no information on that particular individual, he does have information on other participants in the bombing and can search the relationships between those individuals documented in other information systems.

Not everyone's databases will have the same terminology, but there are tools that will help find multiple terms that might describe the same thing or identify similar terms.

One tool Russell said could help is WordNet, an English lexical database developed by Princeton University's Cognitive Science Laboratory (GCN.com/962). Partially funded by the National Science Foundation and the Defense Advanced Research Projects Agency, WordNet links words together, offering synonyms, troponyms and other relations for each word searched.
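A toy sketch of how synonym lookups could reconcile field names across databases. The table below is a stand-in for a WordNet-style resource, not WordNet's actual API or data, and the column names are invented.

```python
# Illustrative synonym table: two column names are treated as candidates for
# the same concept if either name appears among the other's synonyms.
synonyms = {
    "salary": {"pay", "wage", "compensation"},
    "employee": {"worker", "staff member"},
}

def fields_match(a, b, synonyms):
    """Return True if two field names plausibly describe the same thing."""
    return a == b or b in synonyms.get(a, set()) or a in synonyms.get(b, set())

assert fields_match("salary", "wage", synonyms)
assert not fields_match("salary", "employee", synonyms)
```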

Also potentially useful would be Cyc (GCN.com/963), a representational vocabulary, said Michael Witbrock, vice president of research at Cycorp, which oversees the development of Cyc. Like WordNet, Cyc contains hundreds of thousands of terms, along with descriptions of how they are related.

NIST's Ray said 'there are a number of fundamental science questions that have to be answered before the engineering can be done on top of things such as semantic interoperability.'

For instance: How do you measure the semantic distance between two concepts? 'We don't have a unit of measurement for that.'

One semantic program that the Air Force and Army use is Revelytix's Knoodl, a combination ontology editor, registry/repository and wiki.

Brooke Stevenson, president of Army IT contractor Spry Enterprises, said the Army's Office of the Chief Information Officer is using Knoodl to establish a servicewide common vocabulary for the Core Enterprise Services Domain, an effort to facilitate reuse of Army IT systems.

Knoodl is an online service that allows different parties to set up their own communities, said Michael Lang, co-founder of Revelytix. In the Army's community, the CIO office defined the basic subject areas, such as security, collaboration and networking, Stevenson said.

'The way the Army is currently approaching this is to only define a template for what these services should be,' Stevenson said. 'It's not going to be just top down, but also bottom up. We'll define the minimal amount of information that we'll need in order to make those types of services reusable.'