Intelligence is a topic that has been discussed on this site recently. Defining and describing intelligence is difficult; there is no shortage of definitions. There are obviously levels of what we may call intelligence, running all the way from bacterial behavior to consciousness and self-awareness. I will write a series of articles on intelligence, starting at the level where organisms can learn, and call that level basic intelligence. Whether simpler organisms, or evolving systems of such organisms, have basic intelligence is a valid question, but I won't go into it here.

I believe a lot can be learnt from studying how neurons behave and adapt, and I will draw the reader's attention to some surprising and very important results in this regard. This approach makes sense because it lets you see how intelligence developed without having to tackle everything at once. There are simple organisms with basic intelligence that can be studied without first solving all the problems associated with consciousness, intent, attention and so on. So in this article I will talk about, and attempt to define, what I call basic intelligence: a level of intelligence that is useful, but without the complications of things like the self. In later articles I will discuss higher levels of intelligence, with concepts introduced here playing a central part.

Defining something can be difficult, as "intelligence" is just a word, meaning something slightly different to each person. However, in order to talk about basic intelligence it must be defined in a way that satisfies many people's understanding of what it is, while still being consistent. Rather than launch straight into a definition, I will leave it until near the end, when the motivation for it will be clear.

First I will point out what I regard as the essential qualities of basic intelligence, then give some surprising examples to illustrate what is going on.

Qualities of basic intelligence

So what could be considered qualities for basic intelligence?

1. A system capable of performing calculations, such as neurons or a computer. We know enough about brains and intelligence to know that the ability to calculate is essential for even a simple brain.
2. The ability to adapt to new situations with meaningful behavior.
3. Some kind of pattern-recognition skill.
4. Nothing to do with self-awareness and the like.

The amazing abilities of neurons

Please watch the video in the following article that demonstrates such basic intelligence, and pay attention to the behavior of the neurons, not the engineers.

Briefly, what this shows is neurons automatically adapting to a completely different environment from the one they would normally be exposed to. They are rat neurons, but they are in a jar, nowhere near a rat's brain, processing information that is input in a way nothing like what happens in an actual rat's brain. In spite of that, they learn and adapt their behavior to do something useful. It is not exactly clear from the article what was needed to make this learning start; random electrical stimulation seems to have been all it took. What was not needed was high-level instructions telling the brain to avoid walls. There was nothing to tell the neurons what the signals meant, but they were still able to extract information from the input and generate meaningful behavior.

What you should realise is just how flexible and adaptable neurons are. We cannot yet get a
computer to display such general learning behavior.

An even more striking example of such flexibility involving rewiring sensory input is described here
and here.
"All reasonable doubt that the senses can be rewired was recently put to rest in one of the most amazing plasticity experiments of our time"

In the experiment discussed, a ferret's brain was rewired so that visual sensory input from the optic nerve was hooked up to the brain regions where hearing is normally processed. In spite of this, that brain region automatically configured itself to make sense of the visual data, and even gave the ferret 20/60 vision (a third of normal 20/20 acuity). The structures normally found in the visual region of a ferret's brain formed automatically in the section normally used for processing sound, and did so well enough that the ferret was able to see.
How was this achieved?

The design of the experiment allows you to rule out many possibilities. Genes in the brain, or specific brain chemistry, can be ruled out because the auditory section would have the wrong genes expressed and the wrong chemistry. Some kind of top-down instruction can also be ruled out, because the ferret's brain doesn't "know" that such rewiring has occurred. The only remaining possibility is that the neurons were able to automatically recognise the pattern, adjust their internal structure to match it, and extract the relevant information. It should also be noted that, unlike in the rat-brain experiment, at the beginning of the learning process the neurons were given no feedback whatsoever as to whether they were doing the "right" thing. There was no training run where they were told the right answer; all that was required was the pattern in the electrical signals coming in from the optic nerve, for them to latch onto and "figure out".

As of today, psychologists, computer scientists, mathematicians and engineers do not know how neurons perform this feat. What we do know is that the neurons did not use rational thought or mathematics as we know it to achieve it. Each neuron does not contain all the mathematical formulae needed to figure out vision, such as edge detection; for all we know, deriving such abilities analytically may be mathematically intractable. The structures required for vision grew in a bottom-up, self-organizing manner.
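To give a feel for how structure can emerge bottom-up without any feedback, here is a minimal toy sketch (my own illustration, not a model of the ferret experiment) using Oja's rule, a classic unsupervised learning rule. A single model neuron, given only a raw stream of inputs with a hidden direction of variation (the direction and all parameters are invented for the example), ends up aligned with that direction, with no labels or error signal ever provided.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "sensory" stream: 2-D inputs whose variance lies mostly along one
# hidden direction, standing in for structure buried in a raw signal.
direction = np.array([0.8, 0.6])          # unit vector, invented for the sketch
x = rng.normal(size=(5000, 1)) * direction + 0.05 * rng.normal(size=(5000, 2))

# Oja's rule: the "neuron" updates its weights using only its own input
# and output -- no labels, no error signal, no supervision.
w = rng.normal(size=2)
eta = 0.01
for sample in x:
    y = w @ sample
    w += eta * y * (sample - y * w)       # Hebbian growth with a decay term

w /= np.linalg.norm(w)
# The weight vector ends up aligned (up to sign) with the hidden direction:
# the neuron has "found" the structure in its input on its own.
print(abs(w @ direction))
```

The point of the sketch is only that a purely local, bottom-up update can extract the dominant pattern from a signal it was never told anything about, which is the flavor of ability the experiments above demonstrate at far greater scale.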

A general algorithm for basic intelligence?

As mentioned in the articles, there was a debate in the psychology literature up until about the end of last century as to whether brain regions are specialized, requiring specific genes, brain chemistry and development in the correct place, or whether the brain adapts itself to its input. Obviously genes and chemistry have some effect, since the ferret in question had 20/60 rather than 20/20 vision, but as far as the debate was concerned these results argued overwhelmingly for the brain being very flexible and adaptable.

This flexibility suggests that there is a general algorithm for basic intelligence, shared by all neural tissue in sufficiently advanced animals. If that is indeed the case, then once we figure out how neurons form connections, and if attempts to reproduce this structure and function in another substrate such as silicon succeed, AI would make significant and rapid progress. Such a system could, in the order of weeks or months, make more progress on some AI problems than computer scientists have achieved in decades.

A definition for basic intelligence

Based on the observations above, we can now give the following definition for basic intelligence that describes what it is from a signal processing and systems perspective:

"A system that automatically adapts to a signal it has not been trained to, and extracts useful information from it
demonstrates basic intelligence. The more varied and complicated the signals it can adapt to, the more intelligence it demonstrates."

In the case of the ferret you can easily tell that useful information has been extracted because it could see, but there is another important indicator: the internal structure of the auditory section of the ferret's brain grew to match that of the visual section. The fact that this happened automatically demonstrates basic intelligence and shows that such information had been extracted. There will be some subjectivity about what counts as a more complicated signal and which classes of signals are more important than others, but obviously a system that just detected edges in visual data would be less intelligent by this definition than one that did that and also built up a 3-D model of the world as a result.

For this definition to be useful, higher intelligence should not be defined in a radically different way. It is reasonable to expect higher intelligence to be built out of systems that demonstrate basic intelligence in this manner. A beating heart is made up of heart cells that automatically beat in time, and it is much easier to build a strong structure out of strong basic materials than weak ones; I therefore expect higher intelligences to be constructed out of systems that can, by themselves, demonstrate basic intelligence as defined here.

How may this have evolved?

Here I offer some ideas on how and why such an ability evolved. To start with, consider the roundworm C. elegans, which is well studied and has about 300 neurons. These neurons cannot adapt to input the way a ferret's brain does, and the connections are pretty much determined by its genes. The question is: what caused neuronal connections to stop being explicitly determined by genetics?

This was a very important transition, because it allowed the beginning of what I have defined as basic intelligence, or what you may also call learning. The most obvious answer is that it happened for the same reason that the position of each individual air sac in your lungs is not determined by genetics: even if it were physically possible, there simply is not enough room on the genome to store it. Instead, the genes, if expressed properly, produce the fractal branching pattern that ends up growing those air sacs. No doubt a similar thing happens for heart cells, liver cells and so on. As the sensory organs increased in size and complexity, more data needed to be processed, and the number of neurons increased to the point where the wiring diagram could not be stored in the genes. It needed to be learnt from the input data. However, there may be another cause that came first.

Even with a completely specified wiring diagram, there would be errors, which could be fatal to the organism. Any kind of error-correction ability, evolved even before the sensory organs grew too big to be handled by about 300 neurons, would have a definite survival benefit. Such error correction would allow neurons to automatically link up in the correct manner. With error correction in operation, parts of the genome used for wiring could be freed up for other uses, and the growing amount of sensory data could then be handled just by increasing the number of neurons, without storing a rapidly growing wiring diagram.

When such an event occurred, it was significant in the evolution of intelligence, and arguably of life itself. Whether or not you call the roundworm intelligent, it is clear that the neurons in even a small part of a ferret's brain show a level of intelligence clearly and significantly greater than that of a creature with fixed connections between its neurons.

Conclusion

Defining and talking about intelligence can be difficult; however, like many other things, it can be studied at a more basic level, which makes discussing and defining it easier. The experiments involving rewiring sensory input into a part of the brain it is not normally connected to have profound implications for our understanding of how brains work, for the very nature of intelligence itself, and likely for AI. The evolution of brains that can automatically learn from input data, rather than having their connections hard-wired, was a significant step in the evolution of intelligence and life.

Many ideas in this post were inspired by reading "On Intelligence", which I will discuss further in later articles on more advanced intelligence.

Comments

It is not exactly clear from the article what was needed to make this learning start; random electrical stimulation seems to have been all it took. What was not needed was some high-level instructions to tell the brain to avoid walls.

The input from the researchers ensured that the robot avoided walls rather than becoming one that seeks walls. If anything, the whole setup may argue that without some sort of "pain" from bumping against walls (something to avoid in a definition of "basic intelligence"), there is no autonomous learning, as there is no way to decide what should be learned.

You are of course encouraged to define your own terminology instead of arguing about ill-defined words like "what is intelligence", but you still seem more guided by the mystique of the word "intelligence" than by the need for new terminology (a need that has to be justified by arguing present terminology insufficient). In fact, looking at your definition, the words "advanced adaptation" or so may come to mind. Your definition seems to imply that C. elegans has little "basic intelligence", none if we count its evolution as "training", which is a step backward in my opinion.

OK, perhaps the robot needed some sort of "pain", but what about the ferret? The auditory part of its brain was not told at all what to learn; it just picked up on the input signal, from what I gather. This ability is what I regard as most important. I am not sure what the best terminology is for C. elegans; perhaps that should be called basic. I am most interested in neural connections that can change their wiring and adapt to new inputs, and in what is going on when that happens.

I do not see the ferret result as more impressive than the many-years-old camera-to-tactile-display taped to a blind person's back, which also leads to visual consciousness (far from 20/20; just the resolution of the skin on your back).

The auditory part of its brain was not told at all what to learn; it just picked up on the input signal

Brains are very complicated systems with high intelligence already in place, all modules acting on modules that may have gone bad for whatever reason (accident, learning, mad-scientist rewiring). I do not think it is justified to portray the experiment as if an auditory part of a brain, in isolation inside a test tube, learned to see and turned into a visual cortex after an eye was thrown on top. Why would the auditory center even want to see anything? Neural plasticity is very important when discussing higher intelligence, but you want "basic intelligence".

The ferret result is more impressive to me because the structures needed to process vision automatically grew in the auditory section. I agree that the brain is very complicated, with high intelligence, but there still has to be some way that the auditory center rewired itself. I don't think it has anything to do with "want", because I think neural plasticity is a quality demonstrated by neurons in a bottom-up manner, not something caused by higher intelligence. I think higher intelligence is built on this ability, not the other way round. I expect that if you were to take the auditory part of a brain and put it in a test tube, those same structures would form and it would gain some ability to see. No doubt you don't agree?

If not, then how exactly did the rewiring happen? How did the "want" pass all the way down to tell the neurons to form edge detectors? The brain may be intelligent, but without some kind of bottom-up organisation I don't see how it could even know that visual input was there. I expect the bottom-up organisation helped, perhaps somehow meeting the higher-level instructions in the middle.

What I am saying is that such neural plasticity is the foundation of basic intelligence as I define it: the ability of a system to automatically recognize patterns in its input without any feedback, "want" or motivation to do so. Are you saying that such a quality is fundamentally impossible for any system to have? If not, then surely it would be useful to have, no matter what the later motivation was: avoiding walls, banging into them, or whatever.

Now you may ask what the motivation could possibly be for a system with no feedback. The answer is successful prediction, and it is addressed as part of the memory-prediction framework of intelligence that I will discuss later. It is known, for various reasons, that neurons fire in anticipation of events, and successful firing in anticipation of an event could be the feedback needed to train such a system. More sophisticated structures, for example object permanence rather than just lines and edges, allow more successful prediction.
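As a toy illustration of prediction acting as its own feedback (my own sketch, far simpler than any neural model), consider a learner that merely counts which symbol followed which, and whose only measure of success is whether its anticipation of the next symbol was right:

```python
from collections import defaultdict

# A repeating "sensory" stream; the only signal available to the learner
# is whether its own anticipation of the next symbol turned out right.
stream = list("abcabc" * 100)

counts = defaultdict(lambda: defaultdict(int))        # what followed what
correct = []

prev = stream[0]
for sym in stream[1:]:
    # Anticipate the next symbol as whatever most often followed `prev`.
    guess = max(counts[prev], key=counts[prev].get) if counts[prev] else None
    correct.append(guess == sym)
    counts[prev][sym] += 1                            # learn from what actually arrived
    prev = sym

# Early anticipations fail; once the pattern is latched onto, the last
# 100 predictions are all correct.
print(sum(correct[-100:]) / 100)                      # -> 1.0
```

No external teacher ever tells the learner the right answer; "being right about what comes next" is the entire training signal, which is the role I am suggesting anticipatory firing plays.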

Yes, I found this idea a bit strange and unbelievable when I first encountered it; however, when presented with the relevant evidence, motivation and well-reasoned arguments, it starts to make more sense.

Perhaps, given signalling molecules triggering gene expression, ferret neurons attached to the optic nerve (which happens to be made of neurons all the way, including the retina) will produce a sort of visual cortex; however, such molecular biology having turned out this way hardly gets us closer to AI, and it would be much less relevant than a neural network connecting skin tactile sensors to the visual cortex. Top-down versus bottom-up misses the perspective that all sub-systems of an evolved structure are themselves evolved structures that do no more than survive in their environment. Patterns in isolation are not patterns at all without meaning relative to some "behavior" in the widest sense. These issues are far above what I would label "basic", and you should now identify analogues in artificial neural networks; otherwise your definition seems to deny that artificial systems have long achieved "basic intelligence" (which I hope is not your hidden agenda).

It seems we have different interpretations of the results, of what is basic, and of what is important. I hadn't thought of signalling molecules triggering gene expression; this hasn't been suggested in the literature as far as I am aware, so maybe it was tested for? I don't, however, expect that was the cause of the visual patterns appearing.

I don't follow why such patterns appearing isn't important. Having such an ability in an artificial neural network would be a great step forward. I also don't know what you mean by subsystems: if the ability to adapt to input is a general ability, like heart cells beating in time, where is the subsystem that needs to survive? Structures forming is just a consequence of neurons being neurons (with some help from genes, yes, but that is not essential).

Patterns in isolation do make sense to me; prediction is behavior in the widest sense. If a system predicts the next input by making the relevant neurons that will receive that input fire in anticipation, surely that is behavior? If an untrained bunch of 100,000 or so neurons, taken out of context, automatically adapts to patterns without training or feedback, then how can you call that behavior anything other than basic (if that is indeed what happens)? There are no more basic stages within that process to refer to, as far as I can see.

Finally, I have no hidden agenda regarding artificial intelligence. The author of the book "On Intelligence" has founded a company called Numenta that is trying to get such behavior from artificial neural networks by matching the structure of the network to that of the brain better than anyone has done before, on the assumption that there is some sort of general "intelligence algorithm". I am following its progress with interest. I think this is a great approach and will lead to significant progress in such networks. If you had studied the recent literature on artificial neural networks, you might be disappointed, like me, at the approach the field has taken, with little attempt to make the networks match neural structure, and disappointing progress as a result. With the exception of the work Numenta is doing, convolutional neural networks seem to be the best around, but academia now discourages work on neural networks, preferring other techniques like Support Vector Machines instead. The feeling is that neural networks "don't work", whatever that is supposed to mean.

Sascha goes straight to the point, Thor. The experiment would be more convincing if you injected a totally alien kind of signal, instead of a ready-made visual one, and looked to see whether the neurons made sense of it. Something of no interest to your average rat...

One related problem is that, for all we know, the tempting idea of general-purpose "thinking tissue" may be correct but may rely on information being tagged as to its origin, and indeed re-tagged as more useful information is extracted from it. Visual information arriving in an auditory region may still have the "visual" stamp on it.

I saw a documentary on TV (yes, this is scraping the barrel!) about some birth "defects" where bits of the brain that are not normally in contact happened to grow across and fuse. The result was not meaningless synaesthesia but incredibly high abilities in certain specific things. I think this supports the idea that information in the brain is not context-dependent but is heavily tagged, so any bit of the brain can process it.

The next question, though, is why this has any relevance to intelligence. It seems it may have a lot to do with embryology and gene expression. It may be interesting, and may allow you to create an arbitrary classification of organisms with nervous systems into those that are hard-wired and those that are flexible, or again, those that tag information and those that don't. But intelligence is not cladistics. Maybe in your heart you agree with Gerhard that intelligence has to be biology-based? :) Okay, you're not insisting on rat-oriented goals for rat intelligence, but you are suggesting that no matter how smart an animal may be, unless the TISSUES can organise themselves, the animal is not intelligent. That has to be wrong. It's fascinating if the tissues do organize themselves the way you say, but no more than that. What would you make of a system which is just as clever but creates virtual connections in its software?

Newsflash: The captured alien seems to be quite intelligent and is asking about our culture, and can it please have its antimatter back so it can go home before we blow the world up meddling with things we don't understand? But Thor Russell, Head of Research, won't agree that it is intelligent until he's done some surgical experiments on its brain to see whether the neurons can re-wire themselves. The only trouble is, it doesn't have a brain...

Wow! Where do you come up with these pictures? Hmm, I'm not sure we understand each other fully. To me the signal was a totally alien one, because the auditory pathway was not expecting it. I don't see how it could have had some kind of visual tag, other than that tag being the structure of the data itself.

I should perhaps have mentioned it earlier, but experiments demonstrating the kind of general intelligence where basic features are worked out automatically have now been done in AI. The experiments done with artificial neural networks arranged in a Hierarchical Temporal Memory (HTM) structure by Jeff Hawkins show the ability to extract basic features such as edges and lines in vision without being told what the data represents. As far as I am aware, the latest model learns the basic structure of the input by itself, without tagging as to whether the data is vision or sound. The point of the HTM is that there is great similarity in the data coming in from all the senses, and such a structure works it out; it applies equally well to any data with such structure. Here is the video:

http://www.youtube.com/watch?v=cCdbZqI1r7I&feature=related
21 mins, 36 mins and 52 mins may be interesting. I guess I wasn't clear about how the pattern should be demonstrated. In biology it needs to be tissue, as there is no distinction between software and hardware. For AI, if the software figures out the structure, then it is the same as if biology had rewired it. Virtual connections are the same as biological ones, so just as intelligent.

It has a lot to do with intelligence, because to me a system that can automatically work out what has taken computer scientists years to achieve is clearly a whole new level of intelligent! The more advanced neural networks we are now developing are starting to do this.

I will go into the similarities later; watching the video should give you some idea. The idea is that sensory data in the real world is ordered in a hierarchy, e.g. phonemes, syllables, words, sentences; or lines, shapes, simple objects, more complex objects. The claim of the Memory Prediction Framework is that there is a general cortical algorithm, an "intelligence algorithm", used in many places in the brain. There are great similarities in brain structure at the lowest level; the idea is that most sections are doing pretty much the same basic thing, but with different data and at different levels of abstraction. Abstracting syllables from phonemes is similar to abstracting sentences from words, and objects from lines.
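As a toy illustration of hierarchy being recovered from raw data (my own sketch in the style of byte-pair encoding, not Hawkins's cortical algorithm; the "words" are invented for the example), repeatedly merging the most frequent adjacent pair of units first recovers sub-word chunks, then whole words, purely from repetition statistics:

```python
import random
from collections import Counter

random.seed(0)
# Invented "words" concatenated at random: the letter -> chunk -> word
# hierarchy below is recovered purely from co-occurrence statistics.
words = ["badu", "gola"]
tokens = list("".join(random.choice(words) for _ in range(500)))

# Repeatedly merge the most frequent adjacent pair into a new unit.
# Early merges yield sub-word "syllables"; later merges yield whole words.
merges = []
for _ in range(6):
    (a, b), _count = Counter(zip(tokens, tokens[1:])).most_common(1)[0]
    merges.append(a + b)
    merged, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == (a, b):
            merged.append(a + b)
            i += 2
        else:
            merged.append(tokens[i])
            i += 1
    tokens = merged

print(merges)   # both full "words" appear among the merged units
```

Nothing tells the procedure where word boundaries are; pairs inside a "word" simply recur more often than pairs straddling a boundary, which is the same flavor of bottom-up abstraction the hierarchy idea relies on.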

Thor, orientation to external objects is not dependent on neurons: bacteria do it, seeking energy sources. And signal transduction is generic to multi-cellular organisms. It's also real, big science, growing exponentially for thirty years now and about to blow you off the map.

But the distinction between first-order systems, which merely habituate, and second-order systems, which can "re-vision" and solve problems, can be clearly drawn, with some help from semantics and analytical methods.

Yet the obvious model, the calculus of variations, is underdetermined, requiring "regularization": some schema or ontology of the object. For starters, this throws you back on signal transduction and what is called causality in signals.

Thor, a phoneme is a sound pattern and not a part of a morpheme (syllable), which is a syntactic unit. If you don't have any semantics or semiotics, or knowledge of Roman Jakobson's classic study of aphasia, you are not competent in this area. And you can't claim that neurons are everywhere second-order units: the retina responds directly to colour, mediating blood pressure, an autonomic function. Cf. http://vixra.org/abs/1110.0046

And you can't claim that neurons are essential either: plants have an equivalent function in their stems and some capacity to re-vision the sunlight around them. That's where Darwin did experimental work, on climbing plants. Ignore the science and you get Hirnmythologie, brain myth, promoting science fiction. At least Stephen Wolfram and Rudy Rucker are honest about that.

My day job is speech recognition, so I know very well what I mean by phonemes. Plants can respond to sunlight, but they can't display intelligence in the way I have defined it. In biology, neurons are needed for that.

1. A system capable of performing calculations such as with neurons or a computer. We know enough about brains and intelligence to know that the ability to calculate is essential for a simple brain.
2. The ability to adapt to new situations with meaningful behavior.
3. Some kind of pattern recognition skill.
4. Nothing to do with self awareness etc.

Well, that explains why we disagree. The mere fact that #1 requires a "simple brain" already places it well outside the scope of most evolved organisms. In short, you're already looking at a finished "product", so your question is already occurring too late. You need to look at "intelligent behavior" without brains (i.e. bacterial quorum sensing).

For #2, you'd have to define "new" versus "novel" situations. After all, it's not entirely clear that humans can survive "new" situations any better than anyone else, and it's not entirely clear how much such situations are subject to behavior.

For #3, your use of the word "skill" doesn't help. Why do I need to recognize patterns? Isn't it sufficient that I recognize information from the environment? Again, back to bacteria: if I can pick up sufficient chemical signatures to represent a "quorum", does that represent "pattern recognition"?

Regarding #4, I would simply ask: how can a biological system exist and NOT be self-aware? I'm not talking about consciousness. In short, the ability of bacteria to have quorum sensing (and different receptors for the same versus different species) indicates that there is an "awareness" that one bacterium is different from another. Similarly, how does an organism survive if it is incapable of distinguishing its offspring from anything else? If you say that it can recognize offspring, then how would such a thing be possible if it can't recognize itself?

1. How do you explain the ferret results? They are totally unlike what I would expect your theory of intelligence to predict.

2. How do you define, in precise language, this seemingly magical "intent" or purpose you keep using? I have explained myself, but I don't think you can define such a term without it either taking on a soul-like, non-physical quality, or just being something simple like a feedback connection.

3. You seem to deny the very possibility of unsupervised learning because it has none of that "intent". I think you will find, however, that unsupervised learning is an essential part of higher intelligence. Think about the ferret's brain: no matter what behavior was desired from the new visual area, the same thing will always be required from the lowest area, namely "make sense of the input data and pass important features such as lines and edges up to the higher level". This ability is what I mean by unsupervised learning, and it has already been demonstrated in the latest AI systems, such as those developed by Jeff Hawkins of Numenta. It is obviously a very useful quality for a system to have; computer systems now do it, and I think there is overwhelming evidence that biological systems do too.

So, to your objections:

1. I don't agree with your criticism at all. If you can explain how a finished product works, and make one, then the criticism that the natural process of getting there is apparently not well understood is just academic. You don't give a basic arithmetic test to someone who has just passed a calculus exam. People do understand how bacteria operate; I don't see any reason to look up the biochemical processes.

2. This should be completely obvious. Visual data going into an auditory section is novel because the auditory section is not already wired up for it.

3. The intelligence I am talking about does require recognizing patterns. I made it clear that I was not talking about bacterial intelligence but higher intelligence. Recognizing an object does require recognizing patterns.

4. What's your point? I was talking specifically about self-awareness. I don't see how your definition affects anything.

I have defined a level of intelligence without reference to biology, and demonstrated what I consider necessary and sufficient conditions for it. If you automatically say that intelligence is undefined or unthinkable without biology, then that is just word games: you have defined non-biological intelligence out of existence, not said anything worthwhile about it, and there is no point in us discussing anything further.

I don't see why you are so concerned with the most basic level of intelligence, e.g. bacteria. We understand how that works, and can get computers to emulate the behavior of such organisms. I am interested in systems that can learn throughout their life via some kind of hardware/software modification. You can give biochemical explanations for bacterial behavior, but I don't see what they have to do with this discussion. Such work has already been done as far as I am concerned, and however you define such intelligence, whether as adapting to the environment or whatever, it doesn't really affect what I am talking about. There is a major and significant difference when you go from a fixed set of neurons to many, so such a definition will probably stop applying there anyway.

1. I don't agree with your criticism at all. If you can explain how a finished product works, and make it, then the criticism that the natural process to get there is apparently not well understood is just academic.

There isn't a single instance where we've explained a "finished product" in that sense and been able to produce something identical. Certainly we understand aerodynamics, so we can produce aircraft. We can fly, but we can't produce anything that flies like a bird.

That's not splitting hairs, because if you want to build computers or robots that excel at a particular human skill that's fine and I won't dispute that capability. However, if you want to argue that they are intelligent, then that's like arguing that aircraft are simply a different type of bird.

2. This should be completely obvious. Visual data going into an auditory section is novel because the auditory section is not already wired up for it.

Why do you persist in thinking that the brain is pre-wired? The brain processes input data, and it doesn't particularly care where that data comes from. The fact that it can adapt and modify itself to accommodate different kinds of data should make it even clearer just how vastly different the brain is from any engineered human product. It would be like manufacturing a computer chip that is capable of replicating itself and creating a new instruction set when appropriate.

This is the problem when I hear you describe designing a vision system. The eye doesn't see, the brain does. Images aren't somehow "projected" in your brain to be analyzed. That's why you think pattern recognition is so important. In biology, however, it's the anomalies that matter. The brain in animals is capable of recognizing the absence of pattern: the "patterns" that shouldn't exist. That's how animals detect camouflage. Even human facial recognition isn't based on seeing the pattern; it's based on seeing the differences.

Again, there is ample evidence from those who suffer from brain defects which clearly illustrates that their ability to recognize objects is unaffected, but they simply don't know what the objects are, such as the man who sees an elephant and calls it a dog. Now, whether you want to argue that this is different and relates to human language processing, etc., I don't disagree, but it clearly illustrates that there is much more variation in this area than you realize.

3. The intelligence I am talking about does require recognizing patterns. I made it clear that I was not talking about bacterial intelligence but higher intelligence. Recognizing an object does require recognizing patterns.

What patterns? Auditory? Heat? Smell? Touch? Explain which patterns you think are so important.

4. What's your point? I was talking specifically about self-awareness. I don't see how your definition affects anything.

Ironically, self-awareness is the one thing you claimed you didn't need, and yet it is the one thing that is universally present in biological organisms. You aren't going to have much of an intelligence if it isn't capable of recognizing itself as distinct from its environment or the others that it encounters.

1. This is going nowhere; you will just define whatever a computer does as not intelligent.

2. I said the absolute opposite of the brain being pre-wired! I said it wires itself to a significant extent to match the input data. Didn't you read this? "The fact that it can adapt and modify itself to accommodate different kinds of data should make it even clearer just how vastly different the brain is from any engineered human product." That's pretty much what I am saying. The quality you describe is unsupervised learning. The brain does this, and this is how intelligence is demonstrated. Computers cannot do that at all well; however, we ARE slowly learning how to make computers do such a thing, and the more they do so, the more intelligence I will credit them with.

I am not sure what you mean by the brain seeing; of course the brain can be said to see. You can't recognize the absence of a pattern unless you can first recognize a pattern. Surely that's obvious. I don't really think you understand what I am saying with much of this.

Yes, the person calling an elephant a dog probably has a language problem, and I never denied variation in human experience. I am talking about the building blocks of such experience, not trying to explain it all.

3. The patterns in sensory input, of course; that's what I was talking about: recognizing lines/edges and how they make up objects.

4. Self-awareness is present in bacteria! You have to be joking. That's one mighty strange definition of self-awareness if it is. You pretty much have to tautologically define "alive" to mean "self-aware" to get that result, and then of course you have defined AI out of existence again, because it isn't alive. You also have to tautologically define a part of an organism's brain as not intelligent, because it can't recognize itself either. Everywhere I look, you have pre-defined intelligence in such a way that AI is unthinkable.

1. This is going nowhere; you will just define whatever a computer does as not intelligent.

Do you know something I don't? Because they aren't intelligent. Now if you want to demonstrate that they are, or have some insight by which they can be, then I'm all ears. However, if you're only arguing that MAYBE and we MIGHT, then you've got nothing but speculation, and I'm saying that I don't believe it's possible.

...recognizing lines/edges and how they make up objects.

OK ... the vision thing again.

4. Self-awareness is present in bacteria! You have to be joking.

Yeah, I'm joking. I've got more references if you'd like. What you fail to recognize is that any organism incapable of recognizing itself can't recognize offspring. It can't recognize others of its own species. If you can't recognize offspring, then you can't differentiate them from food. It would be the single most wasteful, self-destructive adaptation biology could have originated. Therefore it is a PRE-REQUISITE that all organisms are capable of recognizing themselves [not in any conscious way]. However, it is equally significant that bacteria are also capable of recognizing those that aren't of their species. The latter would make no sense without a reference point of what "self" is.

More to the point, it would be impossible to form a multicelled organism. Anyone who has ever considered organ transplants recognizes how "territorial" the body is regarding foreign tissue. Don't tell me that such cells are incapable of recognizing themselves.

Everywhere I look, you have pre-defined intelligence in such a way that AI is unthinkable.

In my view it is. More telling is that you haven't improved on that argument. However, the problem isn't with my definition. I would argue that it is with the lack of definition regarding what AI is trying to achieve.

1. Why would anyone think that a machine emulating human behavior is anything but a simulation? If it can't even recognize itself as not being human, then what's the point?

2. The proponents of AI uniformly fail to describe what this new "species" is supposed to be.

======================================================

However, let me get to the central issue and, in my view, you'd better hope I'm right. The production of AI as you and others have discussed is about as unethical as one can imagine. To truly create an intelligence that is grounded in nothing more than servitude towards humans ...

You talk about creating intelligence and in the same breath want to deny it the ability to think freely, because it has no basis for recognizing or utilizing such freedom.

It's a convoluted mess that likely won't ever work, and if it does would prove to be catastrophic.

========================================================

Now, if you only want to talk about building more sophisticated calculators, then have at it.

1. You haven't thought about my definition. It's about how a system behaves, without reference to what it is made of. Intelligence is the ability of a system to find patterns it hasn't been trained on. This has nothing to do with your value judgments on what is simulated and what isn't.

I don't see what your point could possibly be about being unethical if any AI is just a zombie...

Talk of freedom doesn't apply to the simple systems with basic intelligence that I am talking about. I am not suggesting we make mechanical humans.

I don't see what your point could possibly be about being unethical if any AI is just a zombie...

I don't either. You're the one proposing that they aren't.

Intelligence is the ability of a system to find patterns it hasn't been trained on.

You keep alluding to these vague traits, but the question remains ... to what end? Why recognize patterns? You obviously have some purpose in mind, so whose purpose is it? Yours or the AI's? If it is yours, then I would maintain that you haven't built anything intelligent, but merely something that emulates what you already do.

If your AI is capable of performing on its own behalf, then you begin to introduce the ethical questions I've raised.

After all, is it really necessary for my calculator to "understand" mathematics? You are arguing that there is something beyond simply performing tasks. You want this thing to be "intelligent" and capable of acting on its own volition. Therefore, I ask again ... if you really mean that, are you prepared to concede it the freedom to truly act on its own volition? If not, then why pursue creating it? If yes, then the ethical questions become relevant.

I am not suggesting we make mechanical humans.

Of course you are. You certainly aren't suggesting that we make worms. You aren't talking about making a dog or a cat. You're talking about what we commonly consider to be "intelligent" life. I'm pretty sure you aren't thinking about a Roomba.

It seems like this intent thing is a permanent sticking point: you need to attribute some kind of intent for something to be intelligent. Asking whether it is free to act on its own behalf is pretty much undefinable and irrelevant to me. The behavior of a simple unsupervised learning system does not have intent; it cannot act on its own behalf or have emotions, but I consider it to be intelligent. I am not suggesting we build machines capable of emotion. Such a thing is not necessary for applications like self-driving cars. Something can be intelligent without being alive. If you don't agree, that's fine, but that's pretty much the end of the discussion as far as I am concerned.

1. How do you explain the ferret results? They are not at all what I would expect your theory of intelligence to predict.

Not at all. The brain is prepared to receive signals and process them. Your arbitrary division between auditory and visual exists only in your perception not in the brain.

2. How do you define, in precise language, this seemingly magical "intent" or purpose you keep using? I have explained myself, but I don't think you can define such a term without it either taking on a soul-like, non-physical quality or just being something simple like a feedback connection.

There's nothing magical about it. Instead of "intent" you can think of it as "directed" or "controlled". At a very fundamental level this is precisely the chemistry we see taking place. Cell division isn't random. You'd have no problem describing the purpose of respiration or digestion, so intent clearly isn't magical even in your own mind.

You may think it's simply a feedback mechanism, but it's considerably more complex than anything humans have ever built. The brain is capable of processing all manner of input signals and constructing a "reality" from those signals. It isn't the signal that determines the result, but the brain.

3. You seem to deny the very possibility of unsupervised learning because it has none of that "intent". I think you will find, however, that unsupervised learning is an essential part of higher intelligence.

What does that mean? There is no "learning" except within the context of the organism. Ultimately reality is constructed from mapping the organism based on the inputs and generating the necessary responses to changes in those inputs. When we get to the "higher" animals we are talking about even more sophisticated mapping processes which begin to include concepts like abstraction, but they are still ultimately linked to the mapping of the organism. That's where we derive our sense of self.

This is clearly exemplified when we look at people that suffer from brain damage or faulty inputs, such as that associated with "phantom limbs". The brain is still mapping the original appendage, but physical reality no longer tracks with it. No amount of "intelligence" will change that reality. This is even more clearly illustrated with conditions like Capgras Syndrome where the individual lacks the emotional connection to the visual stimulus and concludes that the people he knows must be imposters. Despite the obvious irrationality of such a claim, it is a persistent view in those that suffer from it. These are all clear indications that the brain/body connection isn't a simplistic feedback control mechanism and it certainly isn't based solely on processing inputs and arriving at some logical conclusion.

Well, you seem to be moving the goalposts a bit, now talking about higher-level intelligence and constructing reality.

1. You said before: "the concept of "intelligence" becomes a jumble of nothingness when it is paired with the wrong body." I would say that putting a visual signal into an auditory region is similar to pairing a brain with a different body. You seem to be changing your tune. You still need to explain how those signals are processed effectively. The data is definitely different, and there needs to be a process to make sense of it. Some kind of unsupervised learning seems to be the most likely way. Where is this "directed" or "controlled"? I say it's bottom-up, without needing higher-level direction. If not, then how do you explain the mechanism by which this "direction" causes the visual patterns to appear in the auditory area?

2. You seem to be including everything from chemistry to feedback to constructing "reality" in it. Define the various stages, how they differ, and where they apply. Cell division and constructing "reality" clearly aren't the same thing. Why is such a concept necessary for understanding intelligence at the level I discuss? There is no problem with my definition as far as I am concerned.

"It isn't the signal that determines the result but the brain." What is this even supposed to mean, and how does it disagree with anything I have said?

3. "There is no 'learning' except within the context of the organism." Not true at all. Feed a visual input pattern into neurons and they automatically learn edges/lines, among other things. This is universally useful.
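The claim that neurons automatically extract structure from raw input can be made concrete with a toy model. The sketch below is a minimal, hypothetical illustration using Oja's rule (a stabilized Hebbian update); the data, dimensions, and learning rate are invented for the example, not taken from any real neural recording.

```python
import numpy as np

# Toy sketch of unsupervised Hebbian learning (Oja's rule).
# A single model neuron is shown raw, unlabeled input vectors and
# converges toward the dominant direction of variation in that input,
# with no supervision signal and no stated goal.

rng = np.random.default_rng(0)

# Synthetic input: 2-D samples stretched along the x-axis, standing in
# for a recurring oriented feature (an "edge") in the input stream.
data = rng.normal(size=(5000, 2)) * np.array([1.0, 0.2])

w = rng.normal(size=2)            # random initial synaptic weights
eta = 0.01                        # learning rate

for x in data:
    y = w @ x                     # neuron's response to this input
    w += eta * y * (x - y * w)    # Oja's rule: Hebbian growth plus decay

w_unit = w / np.linalg.norm(w)
print(np.round(w_unit, 2))        # aligns (up to sign) with the x-axis
```

The point of the sketch is that "finding the pattern" falls out of a purely local update rule: nothing in the loop references a goal, only the statistics of the input.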

"Ultimately reality is constructed from mapping the organism based on the inputs and generating the necessary responses to changes in those inputs"

Once again, that sounds poorly defined and not correct as I understand it. What do you mean by mapping, and how is understanding supposed to happen if it's just mapping inputs and generating responses? That sounds like the now-discredited and out-of-date behaviorist school of thought in psychology. In fact it sounds like some strange combination of behaviorism with no understanding, yet you also talk about constructing reality.

I would say that putting a visual signal into an auditory region is similar to pairing a brain with a different body.

Based on what? It's still the same body. The brain has the "responsibility" of mapping that body, so if input arrives in a new or novel way then it tries to be as accommodating as possible. It's no different than when an individual goes blind and the emphasis shifts to the other senses. It's still the same entity being mapped.

Cell division and constructing "reality" clearly aren't the same thing.

Interestingly enough, the cells responsible for constructing "reality" can't divide. Do you think that's significant? I'm not free-ranging here. Biology is ultimately a set of chemical processes. These chemical processes become self-contained and regulated through the cell. Cells join together and, utilizing mechanisms similar to those of bacteria, are capable of the sophisticated communication that enables multicellular organisms.

Therefore if your brain is composed of cells, which were differentiated from a single cell, then clearly cell division and the process of ultimately constructing "reality" are the same thing. This isn't semantics. There is no "brain" or "reality" without the body's sophisticated mechanisms for wiring the nervous system. This is controlled, but also relies heavily on Programmed Cell Death, to ensure that unmade connections or spurious results don't persist.

3. "There is no 'learning' except within the context of the organism." Not true at all. Feed a visual input pattern into neurons and they automatically learn edges/lines, among other things. This is universally useful.

Only if you wish to concede that passing an electrical current across a computer chip is useful for understanding operating systems.

The moral of this story is that some parts of the brain are free to roam over the world and to map whatever sound, shape, taste or smell or texture that the organism's design enables them to map. But some other brain parts — those that represent the organism's own structure and internal state — are not free to roam at all; they can map nothing but the body, and are the body's captive audience. It is reasonable to hypothesize that this is the source of the sense of continuous being that anchors the mental self. These dedicated pathways and regions did not evolve so that we could build a mental self, but rather because continuously updated maps of the body's state are necessary for the brain to regulate life.

Antonio Damasio, 2003, Nature
http://www.mirallas.org/Raco_intel/MentalSelf.pdf

"so if input arrives in a new or novel way then it tries to be as accommodating as possible."

Yes, and it tries by using the unsupervised learning I have talked about. You make my point.
2. Speculating on cells dividing and reality isn't relevant as far as I can see.
3. You can do much more than the equivalent of passing an electrical current across a computer. The ferret's brain works out how to see, and computer systems are already making progress on this. The low-level inputs are passed up a hierarchy to give ever higher-level information.
4. "The moral of this story is that some parts of the brain are free to roam over the world and to map whatever sound, shape, taste or smell or texture that the organism's design enables them to map."
Once again you make my point. Parts of the brain not used will automatically learn data that is input, without any "intent". That's what neurons do; it is basic intelligence.
Speculation on the mental self I said I wasn't discussing. Whether that is right or wrong won't disprove anything I say anyway.

Once again you make my point. Parts of the brain not used will automatically learn data that is input, without any "intent".

OK ... then define your terms. You say "without intent" and then proceed to claim a process that is intentional [i.e. learning].

If neurons don't respond for a specific purpose, then what would you call it? Random? If it's not random, then there is an order to it. If so, then you need to describe what the basis for this order is. My description is to attribute an "intent" which is to ensure the survival of the organism. To what end do you think biology produced neurons?

This "unsupervised learning". How can this occur without "intent"? As I said, you surely can't be suggesting that it is simply a random process. Perhaps I should also ask what "supervised learning" is.

Speculation on the mental self I said I wasn't discussing.

I didn't say you were discussing it. You were the one who argued that the brain-mapping discussion was a discredited theory in psychology. I'm merely pointing out that it is not discredited, and is alive and well in neuroscience.

"This "unsupervised learning". How can this occur without "intent"?"

"This "evolution". How can this occur without purpose? ... It is random."

Same argument.

Such learning is NOT intentional. Saying that it is, is your position. In the same way that heart stem cells just beat in time, neurons wire themselves to find patterns in input and extract features. That is what they do; it's difficult to stop them. It's pretty obvious why such a behavior would evolve, though even if it weren't, it wouldn't change the fact that they do it. We already have such behavior in artificial neural nets at a simple level, so it doesn't depend on survival or biology.
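To make concrete what "such behavior in artificial neural nets at a simple level" can mean, here is a hedged sketch of classical competitive (winner-take-all) learning. Everything in it is synthetic and chosen purely for illustration; it is not any specific system mentioned in this discussion.

```python
import numpy as np

# Sketch of competitive ("winner-take-all") learning, a classical form
# of unsupervised learning. Two units, given only unlabeled samples,
# drift toward the hidden patterns in the data; no label, reward, or
# survival pressure is involved anywhere in the loop.

rng = np.random.default_rng(1)

# Unlabeled input: samples scattered around two hidden "patterns".
centers = np.array([[2.0, 2.0], [-2.0, -2.0]])
samples = np.vstack([c + 0.3 * rng.normal(size=(200, 2)) for c in centers])
rng.shuffle(samples)

# Two units with small symmetric initial weights (a simple way to
# avoid "dead units" in this toy example).
w = np.array([[0.5, 0.0], [-0.5, 0.0]])
eta = 0.1

for x in samples:
    winner = np.argmin(np.linalg.norm(w - x, axis=1))  # nearest unit fires
    w[winner] += eta * (x - w[winner])                 # winner moves toward input

# Each unit's weights now sit near one hidden pattern, although the
# system was never told that any patterns existed.
print(np.round(w, 1))
```

As with the heart-cell analogy, the units end up organized around the structure of the input simply because that is what the update rule does when driven by data.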

It's pretty obvious why such a behavior would evolve, though even if it weren't, it wouldn't change the fact that they do it.

You're claiming two different things. On the one hand you want to make their behavior "random", something they possess as an intrinsic attribute, and on the other you want to claim that it wasn't always so but evolved. In fact, you use the phrase "pretty obvious", when clearly no such criterion could exist if it were random. You're attributing a particular "intent" to such evolution, which is to produce more and more refined processes from a simpler set of initial conditions.

Such learning is NOT intentional.

OK. As I've said before, then how would you describe it? If you insist on using "random", then as I said before, you have an interesting definition of the word "random".

No, I am not saying it's random at all. Not intentional does not mean random. You really don't understand the concept of unsupervised learning. I think that is the whole problem, and where this whole intent thing comes from.

Why is it so hard? Heart stem cells beat in time by themselves without having to worry about intent etc, oscillators oscillate without having intent, objects fall without intent, neurons learn patterns without intent.

... and I could ask the same of you. The problem is that you get too reductionist and then claim there is no "intent". So the transport of sodium or potassium conveys no "intent" to you and yet this is precisely what occurs in order to transmit a nerve signal. Would you argue that the signal is not "intentional"? More importantly when we have a collection of such cells making up a brain, are you saying that having thoughts is not "intentional"?

After all if the final product is capable of "intent", then it is hard to argue that the constituent parts that enable it to occur don't similarly fulfill that requirement.

"We risk our lives without a moment’s hesitation when we go out on the highway, confident that the oncoming cars are controlled by people who want to go on living and know how to stay alive under most circumstances."

http://files.meetup.com/12763/intentionalsystems.pdf

Which is precisely why your "intelligent self-driving" car will never be considered intelligent.

Would I argue the signal is not intentional? Yes: calling it intentional is pointless and only adds confusion. If you know what a signal is and how it works, you don't need to add ill-defined names to it. Calling it intentional is the same as calling gravity intentional. Saying something is intentional just because it is part of life is vitalism, clear and simple. "Intent" is ill-defined, clearly means different things to you when talking about signals versus thoughts, and is ultimately meaningless the way you are using it. It springs entirely from your irrational dislike of apparent reductionism, as far as I can see.

Calling it intentional is the same as calling gravity intentional. Saying something is intentional just because it is part of life is vitalism, clear and simple. "Intent" is ill-defined, clearly means different things to you when talking about signals versus thoughts, and is ultimately meaningless the way you are using it. It springs entirely from your irrational dislike of apparent reductionism, as far as I can see.

No. It stems from the fact that these processes aren't simply arbitrary. They are not simple patterns. There is NO basis for why they should exist. Your comparison to gravity is silly.

My use of the word "intent" is not to explain, but to draw attention to the problem. Phrases like vitalism illustrate the misunderstanding.

It is obvious that you want things to have a purpose, but deny that they exist to satisfy that purpose. So when a signal propagates and is interpreted in the brain as sight, there is no intent. There is only the arbitrary collection of signals which produce the happy coincidence of sight.

You want it both ways. You want to argue that attaching the ferret's neurons to a different part of the brain is significant because that part "isn't wired for it", and then argue that the development of partial vision isn't "intentional" on the part of the system to recover from this novel connection.

Apparently everything is just a series of happy coincidences in which the desired results always satisfies a necessary condition that is "intended" without "intent".

BTW ... I'm not entirely clear on whether your ferret's neurons resulted in actual rewiring or simply in a phenomenon known as "blind-sight".

I am sorry you still don't understand my point. You don't get unsupervised learning. I'm not having it both ways. I have explained things fully; if you understood more about signal processing etc. you would get it. I am not going to "un-understand" things I already understand by discussing this further.

I stand by my analogy. Someone I spoke to understood a reasonable amount about how evolution worked and couldn't fault it. They knew the evidence and how it could work. In spite of actually understanding it, they said "but it has no purpose, therefore it cannot be right". Now, I said it didn't need the kind of purpose they talked about (which was of course ill-defined), since they already understood the mechanism, but they said it did. I then tried to explain that it did have a kind of purpose, which was of course a waste of time because that didn't fit their vague definition. I'm not going there with you.

Recognition in Gerhard's sense is more substantially developed as an immune function, nothing to do with neurons. And it works by recognizing patterns of cell and virus surfaces. Immune intelligence is a primary intelligence by precisely Thor's criterion of pattern-recognition.

As for the poor ferret, bats navigate by echo-sounding, and 3D sound works by spatial echo effects. At normal temperatures the air is full of phonons: the face is particularly sensitive to them, and any sensitivity is enhanced with practice. Hence also phenomena like claustrophobia and agoraphobia, which then resemble asthma.

It seems clear [although not obvious] that we have a different set of working definitions for much of what we're talking about. In part, I'm not clear on why it seems so important to label machines as "intelligent". Why should it matter?

There have been several comments that allude to "recognizing" such intelligence, but what difference does it make, unless one wants to introduce the ethical issues I raised earlier.

Perhaps the sticking point is that when I hear "intelligence" [especially in the context of AI], I hear "sentience". If that isn't the intent, then we're merely quibbling over the specific definition of "intelligence" which is obviously vague enough to allow such differences. In addition, it would also make the notion of machine "intelligence" uninteresting. After all, why worry about a label at that point?

The point is not about labelling machines as intelligent; that's just a symptom of the problem, which concerns what unsupervised learning is and why intent is not needed. I don't think intelligence is sentience. Once again, if you define it to be, then you have defined AI not to be intelligent before you even start. I have clearly defined basic intelligence NOT to be sentience in this blog.