Revision as of 06:46, 11 March 2009

Introduction

The project uses the Italian Wikipedia as a source of documents for two purposes: as training data and as a source of data to be annotated.

The Wikipedia maintainers provide a monthly XML dump of all documents in the database: a single XML file containing the whole encyclopedia, which can be used for various kinds of analysis, such as statistics and service lists.
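Because the dump is one large XML file, it is usually processed as a stream rather than loaded whole. As a minimal sketch (hypothetical helper, not part of the tool), page titles can be streamed with the standard library's incremental parser:

```python
import xml.etree.ElementTree as ET

def iter_titles(dump_path):
    """Stream page titles from a dump file without loading it all into
    memory. In real dumps tag names carry a version-specific namespace,
    hence the endswith() match rather than an exact comparison."""
    for _event, elem in ET.iterparse(dump_path):
        if elem.tag.endswith("title"):
            yield elem.text
        elem.clear()  # discard processed elements to keep memory flat
```

The same pattern extends to extracting the text of each `<page>` element one at a time.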

In order to perform text analysis, it is necessary to extract plain text from the documents by removing syntactic decorations (bold, italics, underlining, etc.).

The aim of the Wikipedia extractor tool is to generate plain text from the Wikipedia database, discarding any other information or annotation present in Wikipedia pages, such as images, tables, references and lists.

Each document in the dump of the encyclopedia is represented as a single XML element, encoded as illustrated in the following example from the document titled Armonium:
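A page element in the dump has roughly the following shape (abbreviated here; the exact schema and namespace depend on the dump version):

```xml
<page>
  <title>Armonium</title>
  <id>...</id>
  <revision>
    <text xml:space="preserve">'''L'armonium''' è uno [[strumento musicale]] ...</text>
  </revision>
</page>
```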

The extraction tool is implemented in Python and aims to achieve high accuracy in the extraction task.

The standard page format adopted by Wikipedia makes use of the wiki syntax, a simple and intuitive formalism for specifying meta-information associated with text (bold, italics, underlining, images, tables, etc.). Unfortunately, not every author follows this standard, and some prefer to insert HTML markup directly into documents. Wiki and HTML tags are often misused in the text (unclosed tags, wrong attributes, etc.). The extractor therefore employs several heuristics to maximize the probability of a successful extraction. The main direction for future work is improving the accuracy of these heuristics.
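As a rough illustration of this kind of heuristic (a hypothetical sketch, not the tool's actual code), regular expressions can strip common wiki and HTML markup while tolerating unclosed or malformed tags:

```python
import re

# Emphasis in wiki syntax is runs of 2-5 single quotes: ''italic'', '''bold'''.
BOLD_ITALIC = re.compile(r"'{2,5}")
# Internal links: [[target]] or [[target|label]]; keep only the visible label.
WIKI_LINK = re.compile(r"\[\[(?:[^|\]]*\|)?([^\]]+)\]\]")
# Any HTML-looking tag, opening or closing; an unclosed <b> is still removed.
HTML_TAG = re.compile(r"</?[a-zA-Z][^>]*>")

def clean(text):
    text = WIKI_LINK.sub(r"\1", text)   # keep the link label, drop the target
    text = BOLD_ITALIC.sub("", text)    # strip emphasis quotes
    text = HTML_TAG.sub("", text)       # drop HTML markup
    return text
```

The actual tool handles many more constructs (templates, tables, references, lists), but the general approach is the same: pattern-based rewriting that degrades gracefully on malformed input.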

Description

wiki-extractor.py is a Python script that extracts and cleans text from a Wikipedia database dump. The output is stored in a number of files of similar size in a given directory.
Each file contains several documents in the tool's document format.
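The splitting behaviour described above can be sketched as follows (a hypothetical class with assumed names and sizes, not the script's actual implementation): a writer that rolls over to a new file in the output directory once the current one exceeds a size cap, so that all output files end up of similar size.

```python
import os

class OutputSplitter:
    """Write documents into a directory, starting a new numbered file
    whenever the current one would exceed max_size bytes."""

    def __init__(self, dirname, max_size=1_000_000):
        self.dirname = dirname
        self.max_size = max_size
        self.index = 0        # suffix of the current output file
        self.written = 0      # bytes written to the current file
        os.makedirs(dirname, exist_ok=True)
        self.file = open(self._path(), "w", encoding="utf-8")

    def _path(self):
        return os.path.join(self.dirname, "wiki_%02d" % self.index)

    def write(self, doc):
        # Roll over before a document that would push us past the cap,
        # but never leave a file completely empty.
        if self.written > 0 and self.written + len(doc) > self.max_size:
            self.file.close()
            self.index += 1
            self.written = 0
            self.file = open(self._path(), "w", encoding="utf-8")
        self.file.write(doc)
        self.written += len(doc)

    def close(self):
        self.file.close()
```

Rolling over on document boundaries keeps every document whole within a single file, which is why the files are of similar rather than identical size.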