
The one mention in the Meditations of innate ideas appears in Meditation Three

Our idea of God is innate

Its innateness is supposed to prove God’s existence

Ideas caused by ourselves, by external things or by God

But the idea of God, because it is a perfect idea, must be caused by a perfect being independent of external things

“… in order for a given idea to contain such and such objective reality, it must surely derive it from some cause which contains at least as much formal reality as there is objective reality in the idea” (Med III, AT VII 41)

Roughly, something’s formal reality is what it is, and something’s objective reality is what it is about

In a 1643 letter to the theologian Voetius, Descartes embraces Socrates’ argument in Meno directly:

“[W]e come to know them [innate ideas] by the power of our own native intelligence, without any sensory experience. All geometrical truths are of this sort — not just the most obvious ones, but all the others, however abstruse they may appear. Hence, according to Plato, Socrates asks a slave boy about the elements of geometry and thereby makes the boy able to dig out certain truths from his own mind which he had not previously recognized were there, thus attempting to establish the doctrine of reminiscence. Our knowledge of God is of this sort.”

“… [I]f there were machines which had the organs and the external shape of a monkey or of some other animal without reason, we would have no way of recognizing that they were not exactly the same nature as the animals; whereas, if there was a machine shaped like our bodies which imitated our actions as much as is morally possible, we would always have two very certain ways of recognizing that they were not, for all their resemblance, true human beings.”

Clearly, what Descartes means is not that just any machine “which had the organs and the external shape of a monkey or of some other animal without reason” would be indistinguishable (for the behavior would need to be the same), but only that there would in principle be such machines which would be indistinguishable.

Descartes presents two means to distinguish a real human from any human-like machine:

(1) Possession of language. No machine “should produce different arrangements of words so as to give an appropriately meaningful answer to whatever is said in its presence, as the dullest of men can do”

Chomsky links this first means with language’s “creative aspect”

(2) Diversity of action. While machines do some things well, some better than humans, they fail in other things because they act through the “disposition of their organs,” while humans do everything moderately well, acting through the “universal instrument of reason”

Descartes allows that there might be machines that use words, and in fact use them in human-like circumstances:

“[O]ne can easily imagine a machine made in such a way that it expresses words, even that it expresses some words relevant to some physical actions which bring about some change in its organs (for example, if one touches it in some spot, the machine asks what it is that one wants to say to it; if in another spot, it cries that one has hurt it, and things like that).”

His point is that this is not enough – such a machine quickly reaches a limit on how well it can imitate.
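Descartes' imagined word-using machine can be pictured as a fixed stimulus-to-response table (a toy illustration only; Descartes offers no such formalism): its replies are exhausted by what is wired into its "organs," so it cannot produce novel arrangements of words.

```python
# Toy sketch of Descartes' word-using machine: a fixed
# stimulus -> response table. All names here are illustrative.
RESPONSES = {
    "touch_spot_A": "What is it that you want to say to me?",
    "touch_spot_B": "Ouch! You have hurt me!",
}

def machine_reply(stimulus: str) -> str:
    # The machine answers only stimuli wired into its "organs";
    # anything outside the table exceeds its fixed disposition.
    return RESPONSES.get(stimulus, "")  # silence outside its repertoire

print(machine_reply("touch_spot_A"))    # a fixed, appropriate-seeming reply
print(machine_reply("novel_question"))  # no "arrangement of words" at all
```

The table can be made as large as one likes, but it remains a finite disposition, which is exactly the limit Descartes has in mind.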

“For it is really remarkable that there are no men so dull and stupid, including even idiots, who are not capable of putting together different words and of creating out of them a conversation through which they make their thoughts known….”

“[B]y contrast, there is no other animal, no matter how perfect and how successful it might be, which can do anything like that.”

“magpies and parrots can emit words, as we can, but nonetheless cannot talk like us, … giving evidence that they are thinking about what they are uttering”

There are humans who lack the organs but have the language ability

“men who are born deaf and dumb are deprived of organs which other people use to speak—just as much as or more than the animals—but they have a habit of inventing on their own some signs by which they can make themselves understood to those who, being usually with them, have the spare time to learn their language”

“[W]e see that it takes very little for someone to learn how to speak, and since we observe inequality among the animals of the same species just as much as among human beings, and see that some are easier to train than others, it would be incredible that a monkey or a parrot which is the most perfect of his species was not equivalent in speaking to the most stupid child or at least a child with a troubled brain, unless their soul had a nature totally different from our own.”

“[O]ne should not confuse words with natural movements which attest to the passions and can be imitated by machines as well as by animals.”

“[N]or should one think, like some ancients, that animals talk, although we do not understand their language. For if that were true, because they have several organs related to our own, they could just as easily make themselves understood to us as to the animals like them.”

“[A]lthough there are several animals which display more industry in some of their actions than we do, we nonetheless see that they do not display that at all in many other actions”

“[T]he fact that they do better than we do does not prove that they have a mind, for, if that were the case, they would have more of it than any of us and would do better in all other things”

“[I]t rather shows that [beasts] have no reason at all, and that it's nature which has activated them according to the arrangement of their organs—just as one sees that a clock, which is composed only of wheels and springs, can keep track of the hours and measure time more accurately than we can”

It’s not clear, however, what exactly Turing means by his remark about the Gallup poll –

“If the meaning of the words ‘machine’ and ‘think’ are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, ‘Can machines think?’ is to be sought in a statistical survey such as a Gallup poll.”

It’s hard to see how a statistical survey would reveal the meaning or the answer

Perhaps what Turing means is that if one is to use the terms as the public does then no other sort of answer is available besides that of a Gallup poll

But that seems wrong – after all, the public itself would not be satisfied with an answer of that sort

Turing’s question: What will happen when a machine takes the part of A in this game? Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman?

Susan Sterrett (in a 2000 essay in Minds and Machines) points out that this ambiguity leads to two different Turing tests.

What she calls “The Original Imitation Game Test”: A machine passes the OIG Test if the interrogator decides wrongly as often when the game is played with the OIG (machine replacing woman) as he or she does when the game is played between a man and a woman

What she calls “The Standard Turing Test”: A machine passes the ST Test if the interrogator cannot decide which is the machine and which is the person

She maintains that these two tests are not interchangeable, that there are empirical differences

In the 1952 BBC interview, Turing seems to indicate that the interrogator’s task is to distinguish person from machine, not male from female:

“The idea of the test is that the machine has to pretend to be a man, by answering questions put to it, and it will only pass if the pretense is reasonably convincing…. We had better suppose that each jury has to judge quite a number of times, and that sometimes they really are dealing with a man and not a machine. That will prevent them saying ‘It must be a machine’ every time without proper consideration.” (The Turing Test, p. 118.)

Turing: “I believe that in about fifty years’ time it will be possible to programme computers, with a storage capacity of about 10⁹ [bits], to make them play the imitation game so well that an average interrogator will not have more than 70% chance of making the right identification after five minutes of questioning.”
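A back-of-envelope check puts Turing’s figure of 10⁹ bits in familiar units (the conversion below is an illustration, not part of Turing’s text):

```python
# Converting Turing's predicted storage capacity into modern units.
bits = 10**9
bytes_ = bits / 8        # 8 bits per byte
mib = bytes_ / 2**20     # mebibytes
print(f"{bits:.0e} bits = {bytes_:.3e} bytes = about {mib:.0f} MiB")
```

On this reading, the machine Turing envisioned would need only on the order of a hundred megabytes of storage.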

Moore’s Law: That the number of transistors on integrated circuits doubles every two years

Moore’s Law as Gordon Moore initially set it out in 1965:

“The complexity for minimum component costs has increased at a rate of roughly a factor of two per year. Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years. That means by 1975, the number of components per integrated circuit for minimum cost will be 65,000. I believe that such a large circuit can be built on a single wafer.” (Gordon Moore, “Cramming More Components onto Integrated Circuits,” Electronics Magazine, April 19, 1965)
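Moore’s 1975 projection is plain doubling arithmetic. Assuming a 1965 baseline of roughly 2⁶ = 64 components per circuit (an assumption made here for illustration; the quote does not state the baseline), ten annual doublings reproduce his figure:

```python
# Moore's 1965 projection as doubling arithmetic.
# The ~64-component baseline for 1965 is an assumption for illustration.
baseline_1965 = 64            # components per circuit at minimum cost
years = 1975 - 1965           # ten annual doublings
projected_1975 = baseline_1965 * 2**years
print(projected_1975)         # 65536, matching Moore's "65,000" figure
```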

Turing: “The original question ‘Can machines think?’ I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.”

Turing apparently means that by the end of the century the specialists’ use of the terms will have become the ordinary use

It is hard to see why a change in “the use of words and general educated opinion” matters

What would be relevant is whether what such people mean is true, not what they would say

“What would Professor Jefferson say if the sonnet-writing machine was able to answer like this in the viva voce? I do not know whether he would regard the machine as ‘merely artificially signalling’ these answers, but if the answers were as satisfactory and sustained as in the above passage I do not think he would describe it as ‘an easy contrivance.’”

“It is claimed that the interrogator could distinguish the machine from the man simply by setting them a number of problems in arithmetic. The machine would be unmasked because of its deadly accuracy. The reply to this is simple. The machine (programmed for playing the game) would not attempt to give the right answers to the arithmetic problems. It would deliberately introduce mistakes in a manner calculated to confuse the interrogator.”

“A variant of Lady Lovelace's objection states that a machine can ‘never do anything really new.’ This may be parried for a moment with the saw, ‘There is nothing new under the sun.’ Who can be certain that ‘original work’ that he has done was not simply the growth of the seed planted in him by teaching, or the effect of following well-known general principles. A better variant of the objection says that a machine can never ‘take us by surprise.’ This statement is a more direct challenge and can be met directly. Machines take me by surprise with great frequency. This is largely because I do not do sufficient calculation to decide what to expect them to do, or rather because, although I do a calculation, I do it in a hurried, slipshod fashion, taking risks.”

“It is not possible to produce a set of rules purporting to describe what a man should do in every conceivable set of circumstances. One might for instance have a rule that one is to stop when one sees a red traffic light, and to go if one sees a green one, but what if by some fault both appear together? One may perhaps decide that it is safest to stop. But some further difficulty may well arise from this decision later. To attempt to provide rules of conduct to cover every eventuality, even those arising from traffic lights, appears to be impossible. With all this I agree.

“From this it is argued that we cannot be machines. I shall try to reproduce the argument, but I fear I shall hardly do it justice. It seems to run something like this: ‘if each man had a definite set of rules of conduct by which he regulated his life he would be no better than a machine. But there are no such rules, so men cannot be machines.’ … There may however be a certain confusion between ‘rules of conduct’ and ‘laws of behaviour’ to cloud the issue.”

Searle distinguishes what he calls “weak AI” from what he calls “strong AI.”

“According to weak AI,” Searle writes, “the principal value of the computer in the study of the mind is that it gives us a very powerful tool. For example, it enables us to formulate and test hypotheses in a more rigorous and precise fashion.”

“But according to strong AI,” he writes, “the computer is not merely a tool in the study of the mind; rather, the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states.”

“I have no objection to the claims of weak AI, at least as far as this article is concerned,” Searle writes. “My discussion here will be directed at the claims I have defined as those of strong AI, specifically the claim that the appropriately programmed computer literally has cognitive states and that the programs thereby explain human cognition.”

Searle’s summary of the work of Roger Schank: “Very briefly, and leaving out the various details, one can describe Schank’s program as follows: The aim of the program is to simulate the human ability to understand stories. It is characteristic of human beings’ story-understanding capacity that they can answer questions about the story even though the information that they give was never explicitly stated in the story. Thus, for example, suppose you are given the following story: ‘A man went into a restaurant and ordered a hamburger. When the hamburger arrived it was burned to a crisp, and the man stormed out of the restaurant angrily, without paying for the hamburger or leaving a tip.’ Now, if you are asked ‘Did the man eat the hamburger?’ you will presumably answer, ‘No, he did not.’ Similarly, if you are given the following story: ‘A man went into a restaurant and ordered a hamburger; when the hamburger came he was very pleased with it; and as he left the restaurant he gave the waitress a large tip before paying his bill,’ and you are asked the question, ‘Did the man eat the hamburger?’ you will presumably answer, ‘Yes, he ate the hamburger.’ Now Schank’s machines can similarly answer questions about restaurants in this fashion. To do this, they have a ‘representation’ of the sort of information that human beings have about restaurants, which enables them to answer such questions as those above, given these sorts of stories. When the machine is given the story and then asked the question, the machine will print out answers of the sort that we would expect human beings to give if told similar stories.”
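The idea Searle describes can be sketched as a script with default events that the story’s explicit events override. Schank’s actual programs were far more elaborate than this; every name below is illustrative.

```python
# Toy sketch of script-based story "understanding" in the style Searle
# describes. The restaurant "script" supplies default events; explicit
# deviations in the story override the defaults.
RESTAURANT_SCRIPT = ["enter", "order", "eat", "pay", "leave"]

def did_he_eat(story_events: set) -> str:
    # Default: assume the script ran normally, so the man ate.
    # Deviations mentioned in the story block the default inference.
    if "storm_out" in story_events or "refuse_food" in story_events:
        return "No, he did not."
    return "Yes, he ate the hamburger."

angry_story = {"enter", "order", "burned_food", "storm_out"}
happy_story = {"enter", "order", "pleased", "tip", "pay", "leave"}
print(did_he_eat(angry_story))  # "No, he did not."
print(did_he_eat(happy_story))  # "Yes, he ate the hamburger."
```

Note that neither question answer is stated in the story; both come from the script’s defaults, which is the capacity Searle says the program simulates.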

“One way to test any theory of the mind,” Searle writes, “is to ask oneself what it would be like if my mind actually worked on the principles that the theory says all minds work on. Let us apply this test to the Schank program with the following Gedankenexperiment. Suppose that I’m locked in a room and given a large batch of Chinese writing. Suppose furthermore (as is indeed the case) that I know no Chinese, either written or spoken, and that I’m not even confident that I could recognize Chinese writing as Chinese writing distinct from, say, Japanese writing or meaningless squiggles. To me, Chinese writing is just so many meaningless squiggles. Now suppose further that after this first batch of Chinese writing I am given a second batch of Chinese script together with a set of rules for correlating the second batch with the first batch. The rules are in English, and I understand these rules as well as any other native speaker of English. They enable me to correlate one set of formal symbols with another set of formal symbols, and all that "formal" means here is that I can identify the symbols entirely by their shapes. Now suppose also that I am given a third batch of Chinese symbols together with some instructions, again in English, that enable me to correlate elements of this third batch with the first two batches, and these rules instruct me how to give back certain Chinese symbols with certain sorts of shapes in response to certain sorts of shapes given me in the third batch. Unknown to me, the people who are giving me all of these symbols call the first batch a ‘script,’ they call the second batch a ‘story,’ and they call the third batch ‘questions.’ Furthermore, they call the symbols I give them back in response to the third batch ‘answers to the questions,’ and the set of rules in English that they gave me, they call the ‘program.’”

“Now just to complicate the story a little, imagine that these people also give me stories in English, which I understand, and they then ask me questions in English about these stories, and I give them back answers in English. Suppose also that after a while I get so good at following the instructions for manipulating the Chinese symbols and the programmers get so good at writing the programs that from the external point of view—that is, from the point of view of somebody outside the room in which I am locked—my answers to the questions are absolutely indistinguishable from those of native Chinese speakers. Nobody just looking at my answers can tell that I don’t speak a word of Chinese. Let us also suppose that my answers to the English questions are, as they no doubt would be, indistinguishable from those of other native English speakers, for the simple reason that I am a native English speaker. From the external point of view—from the point of view of someone reading my ‘answers’—the answers to the Chinese questions and the English questions are equally good. But in the Chinese case, unlike the English case, I produce the answers by manipulating uninterpreted formal symbols. As far as the Chinese is concerned, I simply behave like a computer; I perform computational operations on formally specified elements. For the purposes of the Chinese, I am simply an instantiation of the computer program.”
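Searle’s claim, that the rule-follower manipulates uninterpreted shapes, can be sketched as a lookup over opaque tokens. All the symbols and rules below are invented for illustration; nothing hangs on what the tokens "are."

```python
# Toy sketch of the Chinese Room's "program": rules mapping shapes to
# shapes. Neither the operator nor the code attaches any meaning to them.
RULES = {
    ("□◇", "△"): "◎",   # "when you see these shapes, hand back this shape"
    ("□◇", "▽"): "☆",
}

def room(story: str, question: str) -> str:
    # Pure formal manipulation: match shapes, emit shapes.
    return RULES.get((story, question), "〼")  # a default squiggle

print(room("□◇", "△"))  # emits "◎" -- an "answer" produced without understanding
```

To an outside observer who can read the tokens, the output may count as a correct answer; inside the function there is only shape-matching, which is the asymmetry Searle’s argument turns on.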

“Now the claims made by strong AI are that the programmed computer understands the stories and that the program in some sense explains human understanding,” Searle writes. “But we are now in a position to examine these claims in light of our thought experiment.

“1. As regards the first claim, it seems to me quite obvious in the example that I do not understand a word of the Chinese stories. I have inputs and outputs that are indistinguishable from those of the native Chinese speaker, and I can have any formal program you like, but I still understand nothing. For the same reasons, Schank’s computer understands nothing of any stories, whether in Chinese, English, or whatever, since in the Chinese case the computer is me, and in cases where the computer is not me, the computer has nothing more than I have in the case where I understand nothing.

“2. As regards the second claim, that the program explains human understanding, we can see that the computer and its program do not provide sufficient conditions of understanding since the computer and the program are functioning, and there is no understanding. But does it even provide a necessary condition or a significant contribution to understanding? One of the claims made by the supporters of strong AI is that when I understand a story in English, what I am doing is exactly the same—or perhaps more of the same—as what I was doing in manipulating the Chinese symbols. It is simply more formal symbol manipulation that distinguishes the case in English, where I do understand, from the case in Chinese, where I don’t. I have not demonstrated that this claim is false, but it would certainly appear an incredible claim in the example.”