ICGA Journal - Volume 33, issue 3


ISSN 1389-6911 (P)

ISSN 2468-2438 (E)

Impact Factor 2019: 0.500

The ICGA Journal provides an international forum for computer games researchers presenting new results on ongoing work. The editors invite contributors to submit papers on all aspects of research related to computers and games. Relevant topics include, but are not limited to:

(1) the current state of game-playing programs for classic and modern board and card games
(2) the current state of virtual, casual and video games
(3) new theoretical developments in game-related research, and
(4) general scientific contributions produced by the study of games.

Also welcome is research on topics such as:
(5) social aspects of computer games
(6) cognitive research of how humans play games
(7) capture and analysis of game data, and
(8) issues related to networked games.

Abstract: The debate in philosophy and cognitive science about the Chinese Room Argument has focused on whether it is clear that machines can have minds. We present a quantitative argument which shows that Searle’s thought experiment is not relevant to Turing’s Test for intelligence. Instead, we consider a narrower form of Turing’s Test, one that is restricted to the playing of a chess endgame, in which the equivalent of Searle’s argument does apply. An analysis of time/space trade-offs in the playing of chess endgames shows that Michie’s concept of Human Window offers a hint of what a machine’s mental representations might need to be like to be considered equivalent to human cognition.

Abstract: In this contribution, I attempt to improve upon my existing computational model for recognizing beauty in mate-in-3 combinations in the game of international (or Western) chess. The intention is to obtain some insight into the way the existing model may be applicable outside its current scope, e.g., to single moves and endgame studies. The full article consists of two parts. The first part contains two phases of experimentation which compare combinations taken from the domain of compositions and from real games. In both phases I use a yardstick of human-player aesthetic ratings. In this part, I report three results. First, it was discovered that a high positive correlation with the human rating alone does not necessarily mean that (this variation of) the model is viable. Second, variations of the existing model – in terms of the aesthetic features examined and the weights attributed to them – are demonstrably either worse than it or, in the minority of cases examined, at best equivalent to it in performance. So, my original model may, at this moment, be adequate. Third, the experimental results raise questions about the effectiveness of using different weights (even those provided by domain experts) for aesthetic features in order to discriminate between them in terms of inherent ‘importance’. In practice, every discriminating procedure examined proved unreliable and therefore offered no improvement over the default, intelligently designed feature evaluation functions which, in principle, do not value some features over others.
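The kind of model the second abstract discusses can be sketched as a linear weighted sum over aesthetic features, with expert-supplied weights compared against a uniform weighting that values no feature over another. The feature names and weight values below are purely hypothetical illustrations, not taken from the article:

```python
def aesthetic_score(features, weights):
    """Score a combination as a weighted sum of its aesthetic feature values."""
    return sum(weights[name] * value for name, value in features.items())

# Hypothetical feature values for one mate-in-3 combination.
combination = {"sacrifice": 0.8, "economy": 0.6, "surprise": 0.9}

# Uniform weights: no feature is valued over any other.
uniform = {name: 1.0 for name in combination}

# Hypothetical "expert" weights that discriminate between features.
expert = {"sacrifice": 1.5, "economy": 0.5, "surprise": 1.0}

print(round(aesthetic_score(combination, uniform), 2))
print(round(aesthetic_score(combination, expert), 2))
```

The abstract's third result concerns exactly this contrast: whether the discriminating (expert-weighted) variant ranks combinations any better against human ratings than the uniform one.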