Our JRS'2012 Data Mining Contest has finished! You can find the final results on the leaderboard and a summary of the competition in the Summary section.

Once again, thank you for participating!

To those of you who are waiting for the publication of labels for the test data, as well as the PMIDs of documents and the names of the MeSH headings/subheadings: please be patient. Although the JRS'2012 competition has ended, a mini-contest is still ongoing for our students at the University of Warsaw who attend a Machine Learning course. We do not want to disclose any additional data before it finishes, which will be on April 30.

We have revealed all the data related to our competition, including the classification of the test cases and the PMIDs of documents, as well as the names of columns and target labels. The files can be accessed from the Summary page.

namp wrote: Is it possible to make available the script that you used to calculate the score appearing on the leaderboard?

Hello,

I have placed the eval.jar file in the 'Public files' folder. It contains the Java program that was used to evaluate results during the competition. You can access it through the Summary section.

However, keep in mind that the preliminary and final evaluation scores were computed on disjoint subsets of the test data. We cannot reveal this partitioning just yet.

Best regards,

Andrzej Janusz

Hello,

I was wondering if you could guide me on how to use eval1prelim.jar. I tried to run it from the command line, but it produces errors. Should we set its arguments on the command line, or does it provide a GUI?

To set up a contest we had to define datasets and an evaluation procedure. The shared file eval.jar consists of one Java class, FMeasureEvaluationProcedure, which extends the EvaluationProcedure abstract class from the TunedIT framework. You need to inherit from this class if you want to implement an evaluation procedure suitable for a TunedIT-based contest.

If you want to use eval.jar you have a couple of options. If you are not familiar with the TunedIT research tools, you will find the second one easier to start with.

1. You can use TunedTester to set up a test and use FMeasureEvaluationProcedure as your evaluation procedure. Read http://wiki.tunedit.org/ - especially the TunedIT Research section.

2. The evaluation procedure was not designed to be run by hand, but with a little effort you can achieve what you want. Of course, you may freely extend this example, e.g. to pass paths as command-line arguments. Please remember to add eval.jar and core.jar to your classpath (you can read about and download core.jar from http://wiki.tunedit.org/doc:research-architecture). An example may be more or less as follows.
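The concrete invocation code was not preserved in this thread, so here is only a standalone sketch of the kind of computation such an evaluation performs: a per-sample F1 score averaged over all test cases. This is not the actual FMeasureEvaluationProcedure from eval.jar, and the class and method names below are illustrative; the contest's exact measure may differ.

```java
import java.util.*;

// Standalone sketch of an averaged F-measure evaluation. This is NOT the
// actual FMeasureEvaluationProcedure from eval.jar; it only illustrates
// the idea of scoring predicted label sets against ground-truth label sets.
public class FMeasureSketch {

    // F1 for a single sample: harmonic mean of precision and recall
    // over the predicted vs. actual label sets.
    static double sampleF1(Set<String> predicted, Set<String> actual) {
        if (predicted.isEmpty() && actual.isEmpty()) return 1.0;
        Set<String> hits = new HashSet<>(predicted);
        hits.retainAll(actual);                     // true positives
        if (hits.isEmpty()) return 0.0;
        double precision = (double) hits.size() / predicted.size();
        double recall    = (double) hits.size() / actual.size();
        return 2 * precision * recall / (precision + recall);
    }

    // Average F1 over all samples (both lists in the same order).
    static double averageF1(List<Set<String>> predicted, List<Set<String>> actual) {
        double sum = 0.0;
        for (int i = 0; i < predicted.size(); i++)
            sum += sampleF1(predicted.get(i), actual.get(i));
        return sum / predicted.size();
    }

    public static void main(String[] args) {
        List<Set<String>> pred = List.of(Set.of("A", "B"), Set.of("C"));
        List<Set<String>> gold = List.of(Set.of("A"), Set.of("C"));
        // Sample 1: precision 1/2, recall 1 -> F1 = 2/3; sample 2: F1 = 1.
        System.out.println(averageF1(pred, gold)); // prints 0.8333333333333333
    }
}
```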


Thank you very much for your comprehensive response. Finally, I could use TunedTester to evaluate my algorithm's answer, but there is still one question.

As you know, there are two distinct evaluations for the traffic prediction contest, preliminary and final. The preliminary samples comprise 35% of the test data and the final samples the remaining 65%.

The question is: are these subsets selected randomly from the whole answer file, or is the first 35% of the data used for the preliminary evaluation and the remainder for the final? If the subsets are fixed parts of the answer file, why do we get different answers across different runs of TunedTester?

More generally, why is there an option to run TunedTester several times?
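For illustration, a random 35%/65% split of the kind the question asks about could be produced with a seeded shuffle like the one below. This is purely a guess at the mechanism, not the organizers' actual code; whether they select the subsets randomly or as fixed parts of the answer file is exactly what is being asked, and the sample count and seed here are made up.

```java
import java.util.*;

// Hypothetical sketch: drawing a 35%/65% preliminary/final split at random.
// Not the organizers' code; n and the seed are illustrative only.
public class SplitSketch {
    public static void main(String[] args) {
        int n = 10000;                              // illustrative number of test samples
        List<Integer> ids = new ArrayList<>();
        for (int i = 0; i < n; i++) ids.add(i);

        Collections.shuffle(ids, new Random(42));   // fixed seed -> reproducible split

        int cut = (int) Math.round(n * 0.35);
        List<Integer> preliminary = ids.subList(0, cut);  // 35% scored on the leaderboard
        List<Integer> finalEval   = ids.subList(cut, n);  // remaining 65% for final scoring

        System.out.println(preliminary.size() + " / " + finalEval.size()); // prints 3500 / 6500
    }
}
```

With a fixed seed the same partition is reproduced on every run; without one, each run would yield a different preliminary subset, which would explain varying scores.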