Syllogism Challenge 2017

The fields of action planning and automated theorem proving in artificial intelligence (AI) have greatly benefited from well-defined benchmark problems and annual competitions. These made fair comparisons between different approaches and systems possible and triggered a competitive spirit to improve the state of the art and to incorporate new concepts.

We see the need for competitions in the field of human reasoning as well: the number of cognitive theories that claim to explain parts of human reasoning is continuously increasing (for syllogistic reasoning alone there are at least twelve cognitive theories; Khemlani & Johnson-Laird, 2012), yet few comparisons on common data sets exist.

In contrast to AI competitions, where often an optimal solution to a problem must be found, cognitive modeling aims at explaining the underlying cognitive processes by approximating the answer distribution generated by the participants. This requires not just a computational model but a cognitive computational model, i.e., a computational model from which the underlying cognitive processes can be inferred. Multinomial processing trees (for an introduction, see Singmann & Kellen, 2013) are an excellent tool for representing such cognitive processes.

Models can differ in the quality of their predictions:

Which answers are predicted by the theory, and which are not?

Is there a qualitative order between the different answers?

Is there a quantitative prediction (e.g., 77% of the participants decide for answer A, 18% for B, 5% for C)?

To evaluate different cognitive models, several accepted methods from mathematical psychology and artificial intelligence exist. We will evaluate the goodness-of-fit of algorithmic and multinomial processing tree models (independently) on undisclosed behavioral data.

Participation

To participate, please send your submission as a zip file no later than September 1st, 2017 to ragni@cs.uni-freiburg.de (with the subject: Human Syllogistic Reasoning Challenge).

For participation in the algorithmic part:

Source code (preferably in Python, R, Prolog, or Java) and a makefile to execute your code from the command line. The program receives as input the classical abbreviations (e.g., AA1 or IA2; cp. Khemlani & Johnson-Laird, 2012) and the number of participants to fit; for each input, the output is the answer distribution predicted by your program. We will set a time limit of 10 minutes for generating the answer distribution.

Your model's quantitative answer predictions in a CSV or Excel file (the output file generated by the program).
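A minimal sketch of what such a command-line program might look like is given below. The names, the nine response categories, and the uniform-baseline "model" are illustrative assumptions, not part of the official specification; a real entry would replace the placeholder with its cognitive theory.

```python
#!/usr/bin/env python3
import sys

# Hypothetical response inventory: the four moods (A, I, E, O) in the
# two conclusion directions (ac, ca), plus "no valid conclusion" (NVC).
RESPONSES = ["Aac", "Aca", "Iac", "Ica", "Eac", "Eca", "Oac", "Oca", "NVC"]

def predict(syllogism, n_participants):
    """Return a predicted answer distribution for one task code (e.g. 'AA1').

    Placeholder: a uniform distribution over all response categories.
    A real model would implement its cognitive theory here.
    """
    p = 1.0 / len(RESPONSES)
    return {r: p for r in RESPONSES}

if __name__ == "__main__":
    # Example invocation: python model.py AA1 100
    task, n = sys.argv[1], int(sys.argv[2])
    for response, prob in predict(task, n).items():
        print(f"{task},{response},{prob:.4f}")
```

The printed CSV lines (task, response, predicted probability) correspond to the kind of output file asked for above.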

For participation in the MPT part:

An MPT reflecting the algorithmic cognitive process steps in its nodes and precisely specifying the number of parameters. Please provide the tree in an MPTinR-readable form.
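For orientation, MPTinR's plain-text model format lists one equation per response category, with the trees for different item types separated by blank lines. The following toy two-parameter tree (parameters d and g are made up here and do not constitute an actual syllogistic model) illustrates the format:

```
# Toy MPT: with probability d the answer is derived analytically;
# otherwise a guessing parameter g selects between the two responses.
d + (1-d)*g
(1-d)*(1-g)
```

Note that the category probabilities within each tree must sum to 1, as they do here: d + (1-d)*g + (1-d)*(1-g) = 1.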

Participation is possible in one or both parts and is open to everyone. The three best models will be presented at this year's KI in Dortmund.

Supplementary Material

Archive containing the input and output data files as well as a general Readme.

Results

The winner of the 2017 syllogism challenge is the model by Antonis Kakas:

"The entry is based on the fomalization of human reasoning via argumentation. It rests on the idea of Argumentation Logic for uniformly capturing both formal classical logic and informal logic of common sense reasoning."

Model rankings based on the RMSE between the models' outputs and our experimental test data:

Model             RMSE*
Kakas 1           0.067
Kakas 2           0.074
Khemlani          0.145
Stolzenburg       0.161
Mörbitz           0.166
Khemlani-2012**   0.061

* With respect to the Rg-2016 experimental dataset.
** Khemlani-2012 are the experimental data we provided.
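For readers unfamiliar with the ranking metric, the RMSE between a predicted and an observed answer distribution can be sketched as follows; the two distributions below are made up for illustration and are not taken from the challenge data.

```python
import math

def rmse(predicted, observed):
    """Root-mean-square error between two equal-length answer distributions."""
    assert len(predicted) == len(observed)
    return math.sqrt(
        sum((p - o) ** 2 for p, o in zip(predicted, observed)) / len(predicted)
    )

# Illustrative (made-up) distributions over three answer options:
pred = [0.77, 0.18, 0.05]
obs = [0.70, 0.20, 0.10]
print(round(rmse(pred, obs), 4))  # → 0.051
```

A lower RMSE means the model's predicted answer distribution lies closer to the empirically observed one, which is how the entries above were ranked.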