Results

Topics

classification task

In Comparing Multi-label Classification with Reinforcement Learning for Summarisation of Time-series Data

We frame content selection as a simple classification task: given a set of time-series data, decide for each template whether it should be included in a summary or not.

Page 1, “Introduction”
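The framing above amounts to one binary include/exclude decision per template. A minimal sketch, assuming a set of independent per-template classifiers; the template names, data, and inclusion rule are invented placeholders, not the paper's actual classifiers:

```python
# Sketch of content selection as per-template binary classification.
# `include(template, data)` stands in for a trained binary classifier;
# the toy rule below (include when the trigger series is non-empty)
# is an illustrative assumption only.

def select_templates(templates, data, include):
    """Return the templates whose classifier decides 'include'."""
    return [t for t in templates if include(t, data)]

# Toy time-series data keyed by template name (invented example).
data = {"attendance": [9, 10, 8], "deadlines_met": []}
templates = ["attendance", "deadlines_met"]
summary = select_templates(templates, data, lambda t, d: bool(d[t]))
# summary == ["attendance"]
```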

Collective content selection (Barzilay and Lapata, 2004) is similar to our proposed method in that it is a classification task that predicts all templates from the same instance simultaneously.

Page 2, “Related Work”

Problem transformation approaches (Tsoumakas and Katakis, 2007) transform the ML classification task into one or more simple classification tasks.

Page 3, “Related Work”

The LP method transforms the ML task into one single-label multi-class classification task, where the possible set of predicted values for the transformed class is the powerset of labels present in the original dataset.
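The label powerset (LP) transformation described above can be sketched as follows; each distinct label combination observed in the data becomes one class of a single-label multi-class problem. The label names are invented for illustration:

```python
# Sketch of the label powerset (LP) problem transformation:
# map each instance's label set to a single class id, so a standard
# multi-class classifier can be trained on the transformed targets.

def label_powerset(label_sets):
    """label_sets: list of frozensets of labels, one per instance.

    Returns (class_ids, class_to_labels): class_ids[i] is the transformed
    single-label class of instance i; class_to_labels maps each class id
    back to its original label combination.
    """
    class_of = {}  # label combination -> class id
    class_ids = []
    for labels in label_sets:
        key = frozenset(labels)
        if key not in class_of:
            class_of[key] = len(class_of)
        class_ids.append(class_of[key])
    class_to_labels = {cid: set(key) for key, cid in class_of.items()}
    return class_ids, class_to_labels

# Toy example: three instances, labels drawn from {A, B, C}.
y = [frozenset({"A", "B"}), frozenset({"C"}), frozenset({"A", "B"})]
ids, mapping = label_powerset(y)
# ids == [0, 1, 0]: identical label combinations share one class.
```

Note that LP can only predict label combinations seen in training, which is why ensemble variants such as RAkEL (mentioned in the evaluation) train LP on small random label subsets.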

F-score

Appears in 4 sentences as: F-score (4)


We show that this method generates output closer to the feedback that lecturers actually generated, achieving 3.5% higher accuracy and 15% higher F-score than multiple simple classifiers that keep a history of selected templates.

Page 1, “Abstract”

The accuracy, the weighted precision, the weighted recall, and the weighted F-score of the classifiers are shown in Table 3.

Page 6, “Evaluation”
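The weighted measures reported in Table 3 can be sketched under the standard assumption of support-weighted averaging (as in scikit-learn's `average="weighted"`): each label's per-label F-score is weighted by that label's share of the true instances. The labels below are invented:

```python
# Sketch of a support-weighted F-score, assuming standard
# support-weighted averaging over per-label F1 scores.
from collections import Counter

def weighted_f_score(y_true, y_pred):
    labels = set(y_true) | set(y_pred)
    support = Counter(y_true)
    total = len(y_true)
    f_sum = 0.0
    for lab in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == lab and p == lab)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != lab and p == lab)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == lab and p != lab)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        # Weight each label's F1 by its share of the true instances.
        f_sum += (support[lab] / total) * f1
    return f_sum

# Perfect predictions give a weighted F-score of 1.0;
# completely wrong predictions give 0.0.
```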

It was found that in 10-fold cross-validation RAkEL performs significantly better on all these automatic measures (accuracy = 76.95%, F-score = 85.50%).

Page 6, “Evaluation”

Remarkably, ML achieves more than 10% higher F-score than the other methods (Table 3).