PUBLIC MARKS with tag "knowledge extraction"

10 November 2007 12:30

Unstructured Information Management applications are software systems that analyze large volumes of unstructured information in order to discover knowledge that is relevant to an end user. UIMA is a framework and SDK for developing such applications. An example UIM application might ingest plain text and identify entities, such as persons, places, and organizations, or relations, such as works-for or located-at. UIMA enables such an application to be decomposed into components, for example "language identification" -> "language specific segmentation" -> "sentence boundary detection" -> "entity detection (person/place names etc.)". Each component must implement interfaces defined by the framework and must provide self-describing metadata via XML descriptor files. The framework manages these components and the data flow between them. Components are written in Java or C++; the data that flows between components is designed for efficient mapping between these languages. UIMA additionally provides capabilities to wrap components as network services, and can scale to very large volumes by replicating processing pipelines over a cluster of networked nodes.
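The component decomposition described above can be sketched in plain Java. This is only an illustration of the pipeline idea, not the actual UIMA Analysis Engine API: the `Document` and `Annotator` types and the toy annotators here are invented for the example.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for UIMA's shared analysis structure (the CAS):
// each component reads the document text and adds its own annotations.
class Document {
    final String text;
    final List<String> annotations = new ArrayList<>();
    Document(String text) { this.text = text; }
}

// Hypothetical component interface; real UIMA annotators implement
// framework-defined interfaces and carry XML descriptor metadata.
interface Annotator {
    void process(Document doc);
}

public class Pipeline {
    static List<String> run(String text) {
        List<Annotator> stages = List.of(
            doc -> doc.annotations.add("language=en"),        // language identification (stubbed)
            doc -> doc.annotations.add("sentences="           // naive sentence boundary detection
                    + doc.text.split("(?<=[.!?])\\s+").length),
            doc -> {                                          // toy entity detection
                if (doc.text.contains("Alice")) doc.annotations.add("person=Alice");
            }
        );
        Document doc = new Document(text);
        for (Annotator a : stages) a.process(doc);            // the framework manages this data flow
        return doc.annotations;
    }

    public static void main(String[] args) {
        System.out.println(run("Alice works for Acme. She lives in Berlin."));
    }
}
```

In real UIMA, the equivalent of the `Document` object is shared between Java and C++ components, which is why the data model is designed for efficient cross-language mapping.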
Apache UIMA is an Apache-licensed open source implementation of the UIMA specification (that specification is, in turn, being developed concurrently by a technical committee within OASIS, a standards organization). We invite and encourage you to participate in both the implementation and specification efforts.

Semantic MediaWiki (SMW) is an extension of MediaWiki – the wiki system powering Wikipedia – with semantic technology, thus turning it into a semantic wiki. While articles in MediaWiki are just plain text, SMW allows users to add structured data, comparable to the data one would usually store in a database. SMW exploits the fact that such data is already contained in many articles: users just need to "mark" the relevant places so that the system can extract the data without "understanding" the rest of the text. With this information, SMW can help to search, organise, browse, evaluate, and share the wiki's content.
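"Marking" a fact in SMW reuses MediaWiki's double-bracket link syntax, extended with a property name before a double colon. A minimal illustration (the property names and values here are made up for the example):

```wikitext
Berlin is the capital of [[capital of::Germany]] and has
about [[population::3,400,000]] inhabitants.
```

The page still renders as ordinary prose with a link to Germany, but SMW additionally records the `capital of` and `population` properties as structured data that can be queried and browsed.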
This wiki (the one you are using right now) usually runs the most recent version of the Semantic MediaWiki extension, and thus also serves as a demonstration of the system. Semantic MediaWiki is used on many other sites and has also been featured in the press.

10 November 2007 12:15

I found a great set of tools for natural language processing. The Java package includes a sentence detector, a tokenizer, a part-of-speech (POS) tagger, and a treebank parser. It took me a little while to figure out where to start, so I thought I'd post my findings here. I'm no linguist and I don't have previous experience with NLP, but hopefully this will help someone get set up with OpenNLP.

The DBpedia community uses a flexible and extensible framework to extract different kinds of structured information from Wikipedia.
The DBpedia information extraction framework is written in PHP 5. The framework is available from the DBpedia SVN (GNU GPL license).
This page describes the DBpedia information extraction framework. The framework consists of the interfaces Destination, Extractor, Page Collection and RDFnode, plus the essential classes Extraction Group, Extraction Job, Extraction Manager, Extraction Result and RDFtriple.
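The division of labour among those interfaces can be sketched as follows. The real framework is PHP 5; this is an illustrative sketch (here in Java, for consistency with the other examples on this page) whose method signatures are invented for clarity, not taken from the DBpedia source.

```java
import java.util.List;

// Illustrative sketch only: the real DBpedia framework is PHP 5,
// and these signatures are invented to show how the pieces relate.
class RDFtriple {
    final String subject, predicate, object;
    RDFtriple(String s, String p, String o) { subject = s; predicate = p; object = o; }
}

interface PageCollection {                 // supplies wiki page sources to process
    Iterable<String> pages();
}

interface Extractor {                      // turns one page into RDF triples
    List<RDFtriple> extract(String pageSource);
}

interface Destination {                    // receives the extracted triples
    void write(List<RDFtriple> triples);
}

public class ExtractionJob {
    // Roughly what an extraction manager does, stripped to its core loop.
    public static void run(PageCollection in, Extractor ex, Destination out) {
        for (String page : in.pages())
            out.write(ex.extract(page));
    }
}
```

The point of the interfaces is that each part is swappable: a page collection might read a Wikipedia dump or the live API, and a destination might write N-Triples files or load a triple store.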

OpenNLP is an organizational center for open source projects related to natural language processing. Its primary role is to encourage and facilitate the collaboration of researchers and developers on such projects. Click here to see the current list of OpenNLP projects. We'll also try to keep a fairly up-to-date list of useful links related to NLP software in general.
OpenNLP also hosts a variety of Java-based NLP tools which perform sentence detection, tokenization, POS tagging, chunking and parsing, named-entity detection, and coreference resolution using the OpenNLP Maxent machine learning package. To start using these tools, download the latest release here and check out the OpenNLP Tools API. For the latest news about these tools and to participate in discussions, check out OpenNLP's SourceForge project page.
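To make the first two tasks concrete, here are naive plain-Java versions of sentence detection and tokenization. These regex heuristics are not the OpenNLP API (the real tools use trained maximum-entropy models and handle abbreviations, quotes, and other hard cases); they only illustrate what each stage produces.

```java
import java.util.Arrays;
import java.util.List;

// Naive illustrations of two pipeline stages; real OpenNLP tools
// are statistical models, not regexes like these.
public class NaiveNlp {
    // Sentence detection: split after ., ! or ? followed by whitespace.
    static List<String> sentences(String text) {
        return Arrays.asList(text.split("(?<=[.!?])\\s+"));
    }

    // Tokenization: separate trailing punctuation from words, split on spaces.
    static List<String> tokens(String sentence) {
        return Arrays.asList(
            sentence.replaceAll("([.,!?])", " $1").trim().split("\\s+"));
    }

    public static void main(String[] args) {
        String text = "OpenNLP hosts NLP tools. They are written in Java!";
        for (String s : sentences(text))
            System.out.println(tokens(s));
    }
}
```

Later stages consume these outputs: the POS tagger labels each token, the chunker and parser group tagged tokens into phrases, and named-entity detection marks token spans as persons, places, and so on.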

The Stanford NLP Group makes a number of pieces of NLP software available to the public. All these software distributions are licensed under the GNU General Public License for non-commercial and research use. (Note that this is the full GPL, which allows its use for research purposes or other free software projects but does not allow its incorporation into any type of commercial software, even in part or in translation. Please contact us if you are interested in NLP software with commercial licenses.)
All the software we distribute is written in Java. Recent distributions require Sun JDK 1.5 (some of the older ones run on JDK 1.4). Distribution packages include components for command-line invocation, jar files, a Java API, and source code.