Shared Task: Machine Translation

June 7 - 8, 2012
Montreal, Quebec, Canada

The recurring translation task of the WMT workshops focuses on
European language pairs. Translation quality will be evaluated on a
shared, unseen test set of news stories. We provide a parallel corpus as
training data, a baseline system, and additional resources
for download. Participants may augment the
baseline system or use their own system.

GOALS

The goals of the shared translation task are:

To investigate the applicability of current MT techniques when translating into languages other than English

To examine special challenges in translating between European languages, including word order differences and morphology

To create publicly available corpora for machine translation and machine translation evaluation

To generate up-to-date performance numbers for European languages in order to provide a basis of comparison in future research

We hope that both beginners and established research groups will participate in this task.

TASK DESCRIPTION

We provide training data for four European language pairs, and a common
framework (including a baseline system). The task is to improve current
methods. This can be done in many ways. For instance, participants
could try to:

improve word alignment quality, phrase extraction, phrase scoring

add new components to the open source software of the baseline system

augment the system otherwise (e.g. by preprocessing, reranking, etc.)

build an entirely new translation system

Participants will use their systems to translate a test set of unseen
sentences in the source language. The translation quality is measured by
a manual evaluation and various automatic evaluation metrics.
Participants agree to contribute about eight hours of work to the manual
evaluation.

You may participate in any or all of the following language pairs:

French-English

Spanish-English

German-English

Czech-English

For all language pairs we will test translation in both directions. To
have a common framework that allows for comparable results, and also to
lower the barrier to entry, we provide a common training set and baseline
system.

We also strongly encourage your participation if you use your own
training corpus, your own sentence alignment, your own language model, or
your own decoder.

If you use additional training data or existing translation systems, you
must flag that your system uses additional data. We will distinguish
system submissions that used the provided training data (constrained)
from submissions that used significant additional data resources. Note
that basic linguistic tools such as taggers, parsers, or morphological
analyzers are allowed in the constrained condition.

Your submission report should highlight in which ways your own methods
and data differ from the standard task. We may break down submitted
results into different tracks, based on what resources were used. We are
mostly interested in submissions that are constrained to the provided
training data, so that the comparison is focused on the methods, not on
the data used. You may submit contrastive runs to demonstrate the benefit
of additional training data.

The provided data is mainly taken from version 7 of
the Europarl corpus, which is freely available.
Please click on the links below to download the sentence-aligned data, or
go to the Europarl website for the source release.

Additional training data is taken from the new News Commentary corpus.
There are about 50 million words of training data per language from the
Europarl corpus and 3 million words from the News Commentary corpus.

Europarl

French-English

Spanish-English

German-English

Czech-English

French monolingual

Spanish monolingual

German monolingual

Czech monolingual

English monolingual

News Commentary

French-English

Spanish-English

German-English

Czech-English

French monolingual

Spanish monolingual

German monolingual

Czech monolingual

English monolingual

News

French monolingual

Spanish monolingual

German monolingual

English monolingual

Czech monolingual

United Nations

French-English

Spanish-English

French-English 10⁹ corpus

French-English

Crawled from Canadian and European Union sources.

CzEng

Czech-English

The current version of the CzEng corpus (v1.0) is available from
the CzEng website.

You may also use the following monolingual corpora released by the LDC:

Note that the released data is not tokenized and includes sentences of
any length (including empty sentences). All data is in Unicode (UTF-8)
format. The following tools allow the processing of the training data
into tokenized format:
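
As a rough illustration of the kind of preprocessing these tools perform (a minimal Python sketch; the regular expression, file handling, and length limit are assumptions, not the behaviour of the official scripts), tokenization and basic cleaning of a parallel corpus can be approximated as follows:

import re
import sys

def tokenize(line):
    # Naive tokenization: put spaces around common punctuation and split
    # on whitespace. The official tokenizer scripts handle many more
    # cases (abbreviations, numbers, language-specific rules).
    line = re.sub(r'([.,!?:;"()\[\]])', r' \1 ', line)
    return line.split()

def clean_parallel(src_path, tgt_path, max_len=80):
    # Yield tokenized sentence pairs, skipping empty and overlong ones.
    with open(src_path, encoding="utf-8") as src, \
         open(tgt_path, encoding="utf-8") as tgt:
        for s, t in zip(src, tgt):
            s_tok, t_tok = tokenize(s), tokenize(t)
            if not s_tok or not t_tok:
                continue
            if len(s_tok) > max_len or len(t_tok) > max_len:
                continue
            yield " ".join(s_tok), " ".join(t_tok)

if __name__ == "__main__":
    # Illustrative usage: python clean.py europarl.fr europarl.en > pairs.tsv
    for s, t in clean_parallel(sys.argv[1], sys.argv[2]):
        print(s + "\t" + t)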

To evaluate your system during development, we suggest using the
2011 test set. The data is provided in raw text format and in an
SGML format that suits the NIST scoring tool. We also release other
test sets from previous years.

Punctuation in the official test sets will be encoded with ASCII characters (not complex Unicode characters) as much as possible. You may want to normalize your system's output before submission. You may also use a rawer version of the test sets that does not have this normalization.
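
As an example, a minimal normalization pass (an illustrative Python sketch only; the character list is an assumption, not the official normalization) could map common Unicode punctuation to ASCII before the output is wrapped:

# Common Unicode punctuation mapped to ASCII equivalents
# (illustrative list, not the official normalization).
PUNCT_MAP = {
    "\u2018": "'", "\u2019": "'",    # single curly quotes
    "\u201c": '"', "\u201d": '"',    # double curly quotes
    "\u2013": "-", "\u2014": "-",    # en dash, em dash
    "\u2026": "...",                 # ellipsis
    "\u00a0": " ",                   # non-breaking space
}

def normalize_punctuation(text):
    for char, ascii_equiv in PUNCT_MAP.items():
        text = text.replace(char, ascii_equiv)
    return text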

To submit your results, please first convert them into the SGML format
required by the NIST BLEU scorer, and then upload them to the
website matrix.statmt.org.

SGML Format

Each submitted file has to be in a format that is used by standard
scoring scripts such as NIST BLEU or TER.

This format is similar to the one used in the source test set files that
were released, except for:

First line is <tstset trglang="en" setid="newstest2012"
srclang="any">, with trglang set to
either en, de, fr, es,
or cz. Important: srclang is always any.

Each document tag also has to include the system name,
e.g. sysid="uedin".

Closing tag (last line) is </tstset>
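
Putting these points together, a submitted file has roughly the following shape (the document id and segment text are illustrative):

<tstset trglang="en" setid="newstest2012" srclang="any">
<doc sysid="uedin" docid="example-document">
<seg id="1">The first translated sentence.</seg>
<seg id="2">The second translated sentence.</seg>
</doc>
</tstset>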

The script wrap-xml.perl makes the conversion
of an output file in one-segment-per-line format into the required SGML
file very easy:
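
A typical invocation (the exact arguments may differ by script version; the file and system names here are illustrative) passes the target language, the source SGML file, and the system name, and reads the plain-text output on standard input:

wrap-xml.perl en newstest2012-src.fr.sgm uedin < decoder-output.txt > decoder-output.sgm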

Upload to Website

Go to Account -> upload/edit content, and follow the link "Submit a system run"

Select "newstest2012" as the test set and the language pair you are submitting

Select "create new system"

Click "continue"

On the next page, upload your file and add a short description

If you are submitting contrastive runs, please submit your primary system
first and mark it clearly as the primary submission.

EVALUATION

Evaluation will be done both automatically and by human judgement.

Manual Scoring: We will collect subjective judgments about translation
quality from human annotators. If you participate in the shared task,
we ask you to commit about 8 hours of time to do the manual evaluation.
The evaluation will be done with an online tool.

As in previous years, we expect the translated submissions to be recased,
detokenized, and in XML format, just as in most other translation
campaigns (NIST, TC-Star).
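
If your decoder produces tokenized output, it therefore has to be recased and detokenized before wrapping. A naive detokenization step (a Python sketch only, not the standard detokenizer scripts) might look like this:

import re

def detokenize(tokens):
    # Join tokens and reattach punctuation that tokenization split off.
    # A rough approximation only; the standard detokenizer scripts also
    # handle quotes, contractions, and language-specific rules.
    text = " ".join(tokens)
    text = re.sub(r"\s+([.,!?:;)\]])", r"\1", text)  # no space before closers
    text = re.sub(r"([(\[])\s+", r"\1", text)        # no space after openers
    return text

# Example: detokenize("this is a test ( really ) .".split())
# returns "this is a test (really)."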

DATES

Release of training data

December 9, 2011

Test set distributed for translation task

February 27, 2012

Submission deadline for translation task

March 2, 2012

Paper due date

April 6, 2012

Supported by the EuroMatrixPlus project (P7-IST-231720-STP), funded by the European Commission under Framework Programme 7.