A dynamical systems perspective on the relationship between symbolic and non-symbolic computation

Abstract

It has been claimed that connectionist (artificial neural network) models of language processing, which do not appear to employ “rules”, are doing something different in kind from classical symbol processing models, which treat “rules” as atoms (e.g., McClelland and Patterson in Trends Cogn Sci 6(11):465–472, 2002). This claim is hard to assess in the absence of careful, formal comparisons between the two approaches. This paper formally investigates the symbol-processing properties of simple dynamical systems called affine dynamical automata, which are close relatives of several recurrent connectionist models of language processing (e.g., Elman in Cogn Sci 14:179–211, 1990). In line with related work (Moore in Theor Comput Sci 201:99–136, 1998; Siegelmann in Neural networks and analog computation: beyond the Turing limit. Birkhäuser, Boston, 1999), the analysis shows that affine dynamical automata exhibit a range of symbol processing behaviors, some of which can be mirrored by various Turing machine devices, and others of which cannot. On the assumption that the Turing machine framework is a good way to formalize the “computation” part of our understanding of classical symbol processing, this finding supports the view that there is a fundamental “incompatibility” between connectionist and classical models (see Fodor and Pylyshyn 1988; Smolensky in Behav Brain Sci 11(1):1–74, 1988; beim Graben in Mind Matter 2(2):29–51, 2004b). Given the empirical successes of connectionist models, the more general, super-Turing framework is a preferable vantage point from which to consider cognitive phenomena. This vantage may give us insight into ill-formed as well as well-formed language behavior and shed light on important structural properties of learning processes.
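To make the central construct concrete, the following is a minimal illustrative sketch (not taken from the paper) of an affine dynamical automaton: each input symbol is associated with an affine map z → Az + b on the unit square, and the trajectory of the state under an input string carries the computation. The specific maps and acceptance condition below are hypothetical, chosen so the device counts like a one-counter pushdown and accepts balanced strings over {a, b} in which b's never outnumber a's at any prefix.

```python
# Hypothetical affine dynamical automaton sketch: symbol-indexed affine maps
# on the unit square implement a pushdown-like counter.
import numpy as np

# 'a' compresses the state (push); 'b' expands it (pop). These particular
# matrices are illustrative choices, not the paper's.
MAPS = {
    "a": (np.array([[0.5, 0.0], [0.0, 0.5]]), np.array([0.0, 0.0])),
    "b": (np.array([[2.0, 0.0], [0.0, 2.0]]), np.array([0.0, 0.0])),
}

START = np.array([1.0, 1.0])

def run(string):
    """Accept iff the trajectory stays in the unit square and returns to START."""
    z = START.copy()
    for sym in string:
        A, b = MAPS[sym]
        z = A @ z + b
        if np.any(z > 1.0 + 1e-9):  # left the unit square: a 'pop' on an empty counter
            return False
    return bool(np.allclose(z, START))

print(run("aabb"))  # balanced: True
print(run("aab"))   # unmatched 'a': False
print(run("ba"))    # 'b' before any 'a': False
```

The state contracts by 1/2 per `a` and expands by 2 per `b`, so the state encodes the current nesting depth geometrically rather than in a discrete counter register; this is the sense in which such systems are "close relatives" of recurrent network state dynamics.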
