Abstract

This work discusses the relationship between the contributions of Alan Turing – the centenary of whose birth is celebrated in 2012 – to the field of artificial neural networks and modern unorganized machines: reservoir computing (RC) approaches and extreme learning machines (ELMs). First, the authors review Turing’s connectionist proposals and present the fundamentals of the main RC paradigms – echo state networks and liquid state machines – as well as of the design and training of ELMs. Throughout this exposition, the main points of contact between Turing’s ideas and these modern perspectives are outlined, and they are then duly summarized in the second and final part of the work. This paper offers a distinct appreciation of Turing’s pioneering contributions to the field of neural networks and indicates some perspectives for the future development of the field that may arise from the synergy between these views.

1. Introduction

Alan Mathison Turing (1912–1954) is widely regarded as one of the foremost mathematicians of the 20th century (Hawking, 2007), especially due to his proof of the impossibility of a general solution to the Entscheidungsproblem posed by David Hilbert (Turing, 1936). Although Alonzo Church, in a completely independent fashion, reached the same conclusion using the formalism known as λ-calculus (Epstein & Carnielli, 2008) – which would play a role of its own in theoretical computer science (Russell & Norvig, 2009) – Turing’s proof was striking in the sense that it embodied, in an abstract (though almost tangible) way, the essence of the computing machines that were to occupy a preponderant role in the technological history of mankind from the 1940s up to the present day. A result of this nature has the weight of a lifetime achievement, but, for Turing, it was only a sort of “first peak” in an intellectual trajectory that would also leave marks on other chapters of the history of the last century.

In September 1938, the same year in which he concluded his PhD under Church’s supervision, Turing reported to the Government Code and Cypher School (GCCS) at Bletchley Park (Hodges, 1992). Thus began work on the decryption of the naval Enigma cipher that had a remarkable impact on the British war effort, which suffered immensely from the action of the German submarines. The related problem of speech coding brought Turing into contact with the Bell Labs team, which, incidentally, included Claude Shannon, who shared many of Turing’s views regarding what can be broadly termed “information science.” In a tour de force involving cryptography, early signal processing theory, and electronics, Turing conceived a voice-encoding device called Delilah (“the biblical deceiver of men,” as explained in Hodges (1992)), the prototype of which was built in 1944. The system, however, never had the chance to play the practical role for which it was intended.

In parallel with these developments, Turing cultivated an interest that was central enough to leave a definite mark in his post-war research corpus: the idea of building a thinking machine (Leavitt, 2006). This idea is organically related to many of his earlier intellectual pursuits,1 and also to certain philosophical convictions forged in the course of his personal history.

One particular effort was very influential in the development of the field of artificial intelligence (AI) (Russell & Norvig, 2009): the attempt to formalize the notion of machine intelligence using an imitation game that is now termed the Turing test. The test is discussed in a work (Turing, 1950) that is an immense pleasure to read, not only for its unique style, but also for its cornucopia of brilliant insights into many aspects and key questions associated with the later development of AI.

Interestingly, Turing also made other important contributions to the field that, in spite of their relevance, remain relatively little known even today. Among these contributions, we highlight, in this paper, Turing’s ideas on the design of neural networks – which, thanks to works like Teuscher (2001), Copeland (2004), and Copeland and Proudfoot (1999), are starting to receive a more fitting appreciation. From a historical standpoint, these ideas can be considered a development of the logical approach to neural modeling introduced by Warren McCulloch and Walter Pitts (1943). However, as we will discuss later on, Turing’s connectionist perspective is also very rich in its early use of recurrence and, particularly, of unorganized architectures. These features establish a number of interesting parallels with some recent neural approaches – like reservoir computing (Lukosevicius & Jaeger, 2009) and extreme learning machines (Huang, Zhu, & Siew, 2006) – and also with a number of investigations concerning the connectivity pattern of the nervous system. These connections have not, to the best of our knowledge, hitherto been explored. It is our belief that they can be useful not only in historiographical terms, but also as indications of promising research subjects.