License

The CROHME 2013 Train set merges several existing data sets which keep their original copyrights:

expressmatch: University of Sao Paulo

MathBrush: University of Waterloo

KAIST: KAIST lab

MfrDB: Czech Technical University

HAMEX: University of Nantes

Current Version

2.0

Keywords

Online Handwriting, Mathematical Expression Recognition

Description

Sample image from the CROHME dataset.

The dataset provides more than 10,000 expressions handwritten by hundreds of writers from different countries, merging the data sets from three CROHME competitions. Writers were asked to copy printed expressions from a corpus designed to cover the diversity required by the different tasks and chosen from existing math corpora and from expressions embedded in Wikipedia pages. Different devices were used (different digital pen technologies, a white-board input device, a tablet with a touch-sensitive screen), so the data spans different scales and resolutions. The dataset provides only the on-line signal.

In the latest competition, CROHME 2013, the test part is completely original, while the train part merges 5 existing data sets:

MathBrush (University of Waterloo),

HAMEX (University of Nantes),

MfrDB (Czech Technical University),

ExpressMatch (University of Sao Paulo),

the KAIST data set.

Furthermore, 6 participants in the 2012 competition provided their systems' recognized expressions for the 2012 test part. These data allow research on decision fusion and evaluation metrics.

Metadata and Ground Truth Data

The CROHME dataset includes the segmentation, the label, and the layout of each mathematical expression, using the InkML and MathML standards.

The ink corresponding to each expression is stored in an InkML file. An InkML file mainly contains three kinds of information:

the ink: a set of traces made of points;

the symbol level ground truth: the segmentation and label information of each symbol of the expression;

the expression level ground truth: the MathML structure of the expression.

The two levels of ground truth information (at the symbol as well as at the expression level) are entered manually. Furthermore, some general information is added in the file:

the LaTeX ground truth (without any reference to the ink and hence, easy to render);

the unique identification code of the ink (UI), etc.

The InkML format makes references between the digital ink of the expression, its segmentation into symbols and its MathML representation. Thus, the stroke segmentation of a symbol can be linked to its MathML representation.
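The structure described above (traces, symbol-level trace groups, and annotations) can be read with a standard XML parser. The sketch below, using only Python's standard library, parses a small hypothetical InkML fragment; it is illustrative, not a real file from the dataset, and real CROHME files carry additional annotations (the UI, the LaTeX ground truth, and the MathML block).

```python
# Sketch: reading traces and symbol-level ground truth from a
# CROHME-style InkML fragment with the standard library.
# The SAMPLE string below is a hypothetical, minimal example.
import xml.etree.ElementTree as ET

NS = {"ink": "http://www.w3.org/2003/InkML"}

SAMPLE = """<ink xmlns="http://www.w3.org/2003/InkML">
  <annotation type="truth">$x$</annotation>
  <trace id="0">10 20, 11 22, 13 25</trace>
  <traceGroup>
    <traceGroup>
      <annotation type="truth">x</annotation>
      <traceView traceDataRef="0"/>
    </traceGroup>
  </traceGroup>
</ink>"""

root = ET.fromstring(SAMPLE)

# The ink: each trace is a comma-separated list of "x y" points.
traces = {}
for tr in root.findall("ink:trace", NS):
    pts = [tuple(float(v) for v in p.split()) for p in tr.text.split(",")]
    traces[tr.get("id")] = pts

# Symbol-level ground truth: each labeled traceGroup gives one symbol's
# label and the ids of the strokes that form it (the segmentation).
symbols = []
for grp in root.iter("{http://www.w3.org/2003/InkML}traceGroup"):
    label = grp.find("ink:annotation[@type='truth']", NS)
    refs = [tv.get("traceDataRef") for tv in grp.findall("ink:traceView", NS)]
    if label is not None and refs:
        symbols.append((label.text, refs))

print(traces["0"])  # [(10.0, 20.0), (11.0, 22.0), (13.0, 25.0)]
print(symbols)      # [('x', ['0'])]
```

The `traceDataRef` attributes are what tie the stroke segmentation back to the ink, and the same group ids can be cross-referenced from the MathML structure.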

The recognized expressions are the outputs of the competitors' recognition systems. They use the same InkML format, but without the ink information (only the segmentation, labels, and MathML structure).

The total size of the dataset is ~130 MB.

Related Tasks

Math Expression Recognition

Purpose: The difficulty of recognizing a math expression depends on the number of different symbols, the number of allowed layouts, and the grammar used. The competition defines 4 levels (tasks), from 41 to 101 symbols, with increasingly difficult grammars of allowed expressions.

Evaluation Protocol: The competition defines the following evaluation protocol:

participants can use the available training dataset (and more);

the candidate systems take as input an InkML file (without ground truth) and must write as output an InkML file with the symbol segmentation, the recognition results, and the expression interpretation in MathML format. This is exactly the same format as the provided training dataset. In 2013, systems can also generate Label Graph (LG) files;

the evaluation first converts the InkML files into LG files and then compares the resulting InkML and LG files with the ground truth using a provided script.
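The core idea behind the stroke-level part of this evaluation can be sketched as a per-stroke label comparison. This is a toy stand-in for the provided evaluation scripts, which compare full label graphs (segmentation and spatial relations included), not just stroke labels; the stroke ids and labels below are hypothetical.

```python
# Toy sketch of a stroke-level label comparison, a simplified stand-in
# for the competition's label-graph (LG) evaluation scripts.

def stroke_label_rate(truth, output):
    """Fraction of strokes whose predicted symbol label matches the truth.

    truth, output: dicts mapping stroke id -> symbol label.
    """
    if not truth:
        return 0.0
    correct = sum(1 for s, lab in truth.items() if output.get(s) == lab)
    return correct / len(truth)

# Hypothetical ground truth and system output for a 4-stroke expression.
truth  = {"0": "x", "1": "+", "2": "2", "3": "2"}
output = {"0": "x", "1": "+", "2": "z", "3": "2"}
print(stroke_label_rate(truth, output))  # 0.75
```

Measuring at the stroke level makes system outputs comparable even when their symbol segmentations differ, which is why the protocol converts everything to LG files first.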

Several aspects are measured. These are:

ST_Rec: the stroke classification rate, i.e. the percentage of strokes assigned the correct symbol label,