
Thursday, 17 May 2012

The philosophical baby: What children’s minds can teach us about the big questions

Here’s an event tonight worth putting on your winter coat for. A lecture on what children’s minds can teach adults.

Until recently, researchers thought that babies and young children were irrational, egocentric and amoral. But the last 30 years of scientific research have completely overturned that view: in some ways children are smarter, more caring and even more conscious than adults are. This new view of babies and young children has brought new and sometimes startling insights into some of the Big Questions of philosophy: How can we find the truth? Where does consciousness come from? What is the nature of morality?

Professor Alison Gopnik of the University of California, Berkeley, is a world leader in this research, which she presents in Auckland in three lectures, tonight and next week.

Babies aren’t just defective adults, her research shows. Childhood is for learning, she says, and—and this might surprise you—babies’ minds are the most powerful learning machines on the planet. This confirms some of what Ayn Rand observed, based on Maria Montessori’s work:

At birth [observed Rand], a child’s mind is tabula rasa; he has the potential of awareness—the mechanism of a human consciousness—but no content. Speaking metaphorically, he has a camera with an extremely sensitive, unexposed film (his conscious mind), and an extremely complex computer waiting to be programmed (his subconscious). Both are blank. He knows nothing of the external world. He faces an immense chaos which he must learn to perceive by means of the complex mechanism which he must learn to operate. If, in any two years of adult life, men could learn as much as an infant learns in his first two years, they would have the capacity of genius. To focus his eyes (which is not an innate, but an acquired skill), to perceive the things around him by integrating his sensations into percepts (which is not an innate, but an acquired skill), to coordinate his muscles for the task of crawling, then standing upright, then walking—and, ultimately, to grasp the process of concept-formation and learn to speak—these are some of an infant’s tasks and achievements whose magnitude is not equaled by most men in the rest of their lives.

What’s it like to be a baby? Says Gopnik, “It’s like being in love in Paris for the first time after you’ve had three double espressos.”

So maybe instead of getting children to be more like adults, we adults should become more like children.

26 comments:

Gopnik has jumped on the Bayesian Epistemology bandwagon. Bayesian Epistemology says that we can evaluate the truth of theories by weighing probabilities. But that is profoundly wrong. Probability is a physical concept and so it belongs in the domain of physics and not in the domain of epistemology. To apply probability theory to epistemology is a category error.

In reality, we evaluate theories according to the explanations they contain. What is important is content, not some probability associated with the content.

Bayesians like Gopnik think that the distinction between human and animal learning is one of degree. She thinks animals have some ability to learn (e.g., crows). But there is a very sharp distinction between humans and animals, as Ayn Rand was well aware.

Humans are universal explainers, and it is our ability to create explanations that sets human learning apart from animal learning, for animals have no ability whatsoever to explain anything. All animals can do is fill in parameters on algorithms that were handed to them by their genes.

Gopnik also thinks that learning in babies involves something special. But it does not: it involves exactly the same processes that adults use. The main difference is that the damaging effects on creativity of coercive child-rearing practices have had longer to run their course by the time one is an adult.

I didn't pick up any reliance on Bayes in last night's lecture. And while she certainly recognised a distinction between animal and human brains, she didn't identify the difference as being our possession of a conceptual faculty--which is what explains the essential difference in both our brain development and our use of it.

Mind you, even if she's fifty years late (in the case of Rand), one hundred years late (in the case of Montessori), or 2,400 years late (in the case of Aristotle), in this day and age it's astonishing to see someone linked to the philosophy faculty at Berkeley talking about causality as if its existence were uncontroversial, about humans' ability to understand the world around them as if it were never a matter of debate, and about the fact that all knowledge comes through the senses as if alleged philosophers had not been battling the obvious for two millennia.

But in any case, you err, sir. Learning in babies certainly is something different. If we could emulate their ability to absorb, we would be geniuses. But we can't, so most of us ain't.

The difference is nothing to do with "coercive child-rearing." The difference is in the brains.

Maybe she'll bring up Bayes in the subsequent lectures? I wouldn't hold too much hope for the philosophy faculty at Berkeley just yet!

The idea that children have some special ability to learn is tied up with the notion of critical periods. But if you do a quick google you'll see there is a lot of controversy about whether they actually exist. Do you have a (philosophically sound) scientific paper defending critical periods which you stand by?

Gopnik is right in that children learn using exactly the same processes that scientists use to develop and test theories. In fact all human learning involves these exact same processes. The reason is that there is no other way to learn.

Also, if you have the ability to learn you have the full universal ability: you are capable of learning anything that can be learnt. There is no such thing as a learner that has only a partial ability to learn. You either have the full ability or you do not. This is tied up with the nature of universality, which doesn't come in degrees. For example, in the context of universal computation, there is no such thing as a computer which is 50% universal.

So, babies have exactly the same universal ability to learn that adults have. They cannot be more or less universal than an adult.

And an adult doesn't lose any ability to learn. What happens is that they learn to dislike learning and they no longer have the time or motivation. The difference is ideas and circumstance, not brains.

Note that Rand doesn't attribute the difference to brains. That would imply that babies have innate abilities that adults do not have. But she explicitly repudiates innate abilities in the quote you put up (and elsewhere too). According to her, babies have no innate ability to walk or to focus their eyes and must learn these things without any genetic assist whatsoever.

So Rand is not saying the difference is brains (genes); the difference must be to do with ideas and the knowledge you have acquired. An adult genius is not a genius by the good fortune of having the right genes but by having acquired better knowledge than the rest of us, including better knowledge of how to think.

BTW, I'm surprised by the number of Objectivists who think genius and talents are innate to some extent or other and think that Rand could not have meant what she said about no innate talents.

Yet, Rand's view is to take the highest view of what it is to be human. We are all capable of soaring, it just takes the right kind of nurture.

No one is born with any kind of “talent” and, therefore, every skill has to be acquired. Writers are made, not born. To be exact, writers are self-made.

Talents include the talent to learn. If babies and infants have critical periods that give them special talents to learn, then where did this special talent come from? It can only be from genes.

Adults who put their mind to it in fact learn faster than babies. The reason is that they have much more accumulated knowledge. Languages seem harder for an adult because adults want to express more complicated ideas.

In the realm of the mind, the underlying details of the hardware are irrelevant: they have been abstracted away. What is important are ideas. Ideas give us our power to learn and to think. How can a baby have better ideas about how to learn and to think than an older person?

I would have thought you would welcome this. It is inspirational, whereas your view is much more limiting.

A brain still being "wired"--in Montessori's words, an absorbent mind--is a very different brain than one in which myelinisation has already taken place.

This doesn't explain how an infant brain has the special learning abilities you maintain it does. Being good at learning means having good knowledge about how to learn. So how does myelinisation (or lack of) give one that knowledge? Does it make one more creative? How?

Your assertion has similar logic to thinking certain drugs make one more creative. But how do drugs contain the knowledge to affect human behaviour like that?

@Brian S: You seem to be laden with misconceptions, including the strange notion that the brain is whatever you want it to be.

I'll respond to just a few of your more outlandish claims.

** You say "Maybe she'll bring up Bayes in the subsequent lectures?" Maybe she will and maybe she won't. Maybe she'll also fly in on a broomstick singing "Brian S. doesn't even know me." Because "maybe" is not evidence. It's just speculation without evidence. Which is to say it's just a purely arbitrary claim. Which is to say it's no more than noise. I say all this only because you seem to demonstrate in other comments that you think your "maybes" adduced without evidence are the beginning of knowledge.

** You say "In the realm of the mind, the underlying details of the hardware are irrelevant." What an utterly preposterous claim. It is the nature of the brain that allows the brain to function as it does. Change it, damage it, and you change the way it functions. Understanding what it's made of and how it's made up is the very key to understanding how it functions.

** You say "I would have thought you would welcome [the idea that brains are whatever I say they are]. It is inspirational whereas your view is much more limiting."

There's no kind way to say this: this is total bollocks. You must understand that the brain is not whatever you want it to be; the brain is what it is. This is not a "limit"; it is reality. And the reality is that the child's brain and the adult's brain are what they are. Which is very different.

** You say "If babies and infants have critical periods that give them special talents to learn then where did this special talent come from: it can only be from genes."

What nonsense. An infant's brain is not "good at learning" because the baby has some prior knowledge about how to learn. The baby absorbs data because the baby has an absorbent mind, allowing it to absorb wholesale all the sensory data about the world around her: this provides the data integrated by the brain to begin forming knowledge, and at a rate not possible for adults. To put it bluntly, "It's the hardware, stupid."

** You say "Being good at learning means having good knowledge about how to learn. So how does myelinisation (or lack of) give one that knowledge?"

Myelinisation does not "give knowledge." (Sheesh!) Myelin is a white gelatinous substance that begins forming a "sheath" around neuron axons to protect the integrity of signals from one part of the brain to another. The process begins about fourteen weeks after conception and continues until adolescence. It is this process that explains so much of the developing brain. Brain that is myelinated is functional; brain being myelinated is being ordered. The whole process is part of the ongoing neuroplasticity of the brain.

** You say "The critical period hypothesis provides us with excuses."

I have no idea what you mean by your "critical period hypothesis." But I suspect again your problem is that facts are getting in the way of your wish to have things any way you want them to be. The fact is that as the brain develops the child has what Maria Montessori identified as "sensitive periods" during which particular areas of learning are especially easy. Neuropsychologists are now beginning to understand that the sensitivities she observed are the result of changes occurring in the brain through ongoing neuroplasticity. To read more about this, including the research on the neuropsychology of sensitive periods, I recommend Angeline Lillard's book "Montessori: The Science Behind the Genius".

But in any case, I doubt that the metaphor of "running programmes" works, even in jest. Especially not with a baby's brain.

Because the difference between a baby's brain and adult's brain is not like the difference between a Mac and a PC. It would be more like the difference between a computer and something growing into a computer.

Or the difference between a sponge, and something growing into a totally neurally wired network.

#1) "Neuroscientists Identify How the Brain Remembers What Happens and When"
http://www.sciencedaily.com/releases/2011/08/110804141701.htm
Based on the original peer-reviewed paper: "Integrating What and When Across the Primate Medial Temporal Lobe"

#2) "What Do Infants Remember When They Forget?"
http://www.sciencedaily.com/releases/2011/09/110927155220.htm
Based on: "What do infants remember when they forget? Location and identity in six-month-olds’ memory for objects"

#3) "Forgetting Is Part of Remembering"
http://www.sciencedaily.com/releases/2011/10/111018111938.htm
Based on: "The Benefit of Forgetting in Thinking and Remembering"

#4) "New Research Shows That We Control Our Forgetfulness"
http://www.sciencedaily.com/releases/2011/07/110705091115.htm

There are obviously more articles on the topic from the sciencedaily website.

Speaking metaphorically, [at birth, the child's mind] has a camera with an extremely sensitive, unexposed film (his conscious mind), and an extremely complex computer waiting to be programmed (his subconscious).

She doesn't speak of a child's brain growing into a computer. She recognises that the computer is already there at birth. It is just waiting to be programmed.

A young child's brain is already fully universal and in more than one way. It is running a universal knowledge creator (software) on a universal computer that is instantiated in the brain.

Without these types of universality the child would not be able to learn anything. Do you see this?

And do you see that there are different levels? The hardware implementation of the universal computer is abstracted away by the software of the mind, just as a computer program abstracts away the details of its implementation on a particular machine.

The thing that changes as a child grows is software as the mind programs itself. Concomitant with this are changes in the brain, as one would expect. But you can't understand what is going on by looking at the level of neurons: you have to look at things from the level of the mind (software).

* As well as not noticing what Rand said, it seems you did not watch the TED talk you linked to. Yes, that is Bayes she is talking about.

I have no idea what you mean by your "critical period hypothesis." But I suspect again your problem is that facts are getting in the way of your wish to have things any way you want them to be. The fact is that as the brain develops the child has what Maria Montessori identified as "sensitive periods" during which particular areas of learning are especially easy.

Um, well, apart from a change of emphasis to "sensitive" that pretty much is the critical period hypothesis:

http://en.wikipedia.org/wiki/Critical_period_hypothesis

(you could have googled it BTW).

If you read the wiki page, it's clear that critical/sensitive periods are highly controversial.

Attributing a child's ability to learn to a "sensitive" period which makes learning "easy" is demeaning to the child. It diminishes their accomplishment.

Professor Alison Gopnik comments in that TED talk video that Bayesian learning is the best machine-learning (ML) scheme/algorithm available today. But the best and most popular ML algorithm today is the SVM (support vector machine). An SVM can learn in both a linear and a non-linear manner, which is actually closer to how the human brain works: the brain processes information via both linear and non-linear mechanisms.

Her error is not a big deal anyway (i.e., it's unimportant, very minor), because she's a neuroscience specialist and not a machine-learning expert (which is my domain).
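Not an SVM implementation, but the linear vs non-linear distinction above can be sketched in plain Python. XOR is the classic function a purely linear classifier cannot fit, while a non-linear feature map (the same idea kernels exploit in an SVM) makes it learnable. The perceptron here is a stand-in for any linear classifier; all numbers are illustrative.

```python
# Sketch (not an SVM): why non-linear feature maps matter.
# XOR is not linearly separable in raw (x1, x2), but adding the
# non-linear feature x1*x2 makes it separable -- the same idea
# kernels exploit in an SVM.

def perceptron(points, labels, epochs=1000):
    """Train a linear model w.x + b by the perceptron rule."""
    dim = len(points[0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, y in zip(points, labels):   # y is +1 or -1
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + y * xi for wi, xi in zip(w, x)]
                b += y
    return w, b

def accuracy(points, labels, w, b):
    """Fraction of points classified with the correct sign."""
    correct = sum(
        1 for x, y in zip(points, labels)
        if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) > 0
    )
    return correct / len(points)

xor_pts = [(0, 0), (0, 1), (1, 0), (1, 1)]
xor_lbl = [-1, 1, 1, -1]

# Linear model on raw inputs: cannot fit XOR.
w, b = perceptron(xor_pts, xor_lbl)
print(accuracy(xor_pts, xor_lbl, w, b))   # less than 1.0

# Same model after a non-linear feature map (x1, x2, x1*x2): fits XOR.
mapped = [(x1, x2, x1 * x2) for x1, x2 in xor_pts]
w, b = perceptron(mapped, xor_lbl)
print(accuracy(mapped, xor_lbl, w, b))    # 1.0
```

The feature map plays the role of a kernel: the classifier stays linear, but it is linear in a richer space where the problem becomes separable.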

Brian S said... In reality, we evaluate theories according to the explanations they contain. What is important is content, not some probability associated with the content.

Can you give an example here? I mean a concrete example of a task one performs which clearly describes your scenario. The content you're talking about is simply probability, as Prof. Gopnik was describing in how that kid was performing experiments (blicket detector).

FF, consider Objectivism, for example. How does one even begin to assign a non-arbitrary prior probability that it is true? And why would you want to? You evaluate Objectivism according to the explanations it contains. Like its explanations about altruism. If you find a flaw in one of the explanations then that explanation is not true. Truth is binary, it does not come in degrees. An explanation that is false in reality cannot have any objective probability of being true.

Right now you are thinking about my paragraph above trying to see if it makes sense. Are you calculating probabilities?

If you are interested, read the Choices chapter in _The Beginning of Infinity_ by David Deutsch (who, as you know, is one of my favourite philosophers). There is also a gmail list:

In computer science and quantum physics, the Church–Turing–Deutsch principle (CTD principle) is a stronger, physical form of the Church–Turing thesis formulated by David Deutsch in 1985. The principle states that a universal computing device can simulate every physical process.

I was going to explain why it is important to this discussion, but Deutsch himself has just put up a post on The Fabric of Reality list that covers my point, so I'll just quote that instead:

On 21 May 2012, at 2:26am, Colin Geoffrey Hales wrote:

> Hi FoR folk,
>
> I thought you might be interested in the first signs of the impact, on science itself, of 20 years of a 'science of consciousness'. The ignition point for the change is obviously in the neurosciences.

I think that isn't obvious. It depends on the answer to this question:

Is a computer program that is accurately emulating a conscious process, necessarily conscious?

If no, then it seems to me you have a *philosophical* problem reconciling that with the Turing principle (that a universal computer can perform any information processing task that any physical object can perform). And neuroscience could only begin to be relevant once that problem is solved and we know why the process of introspection to detect whether one is experiencing qualia isn't an information processing task (or how the Turing principle is false -- which would at a minimum require overturning quantum theory, as Penrose hopes to do).

If yes, then consciousness is an attribute of computer software only, not hardware (provided it is universal) and therefore neuroscience has nothing whatever to tell us about consciousness, so again the problem of consciousness is a purely *philosophical* one.

Do you have an obvious way round this dilemma?

-- David Deutsch

It is the same for things like intelligence and learning. PC's position that "It's the hardware, stupid" is therefore untenable. The hardware is important only in that it is universal.

Brian, what is learning? (Choose any way you wish to define it, be it the psychologist's, information theorist's, or mathematician's definition, etc.) The wording of the definition of learning may differ between disciplines, but they all boil down to one meaning only.

Brian said... In reality, we evaluate theories according to the explanations they contain

Those very explanations you're quoting above are prior probabilities, the very same thing as in Prof. Gopnik's description of Bayesian learning. The Bayesian algorithm is a supervised-learning-type algorithm (i.e., learning by example, from a teacher or a coach), i.e., learning by induction.

There's no escape from that. You can't have explanations until the event (from which one is trying to learn) has already happened and you have noted what outcomes it produced. You can't explain the outcome(s) of an event unless it has already taken place; otherwise one would need psychic powers to foresee the possible outcome(s) of a future event before it even takes place. That's why Bayesian learning requires conditional (prior) probabilities to be known in advance.

That's exactly what that kid was doing in performing experiments and evaluating hypotheses (blicket detector). He didn't foresee the correct answer via psychic power before he even started. His evaluation was based on trial and error: he needed to fiddle (event) with the blocks first before evaluating the outcome (continue with the current hypothesis and try to improve the conditional probability of the event, or dismiss the current hypothesis, formulate a new one, and try again to see if the conditional probability is higher than in previous attempts). He won't keep going on forever; once he reaches a solution, he stops (generalization has been achieved).

As I have stated above, we humans don't use a calculator in our daily lives to compute Bayes probabilities every time we're evaluating a hypothesis. That capacity is already built into how our brains work, and the mechanism seems pretty much universal.
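The trial-and-error updating described above can be sketched as an explicit Bayesian update. A minimal sketch for a blicket-detector-style task; the priors and likelihoods are made-up illustrative numbers, not values from Gopnik's experiments.

```python
# A hedged sketch of Bayesian hypothesis evaluation, loosely modelled
# on the "blicket detector" task: which block makes the machine go?
# All priors and likelihoods below are illustrative assumptions.

def bayes_update(priors, likelihood):
    """One Bayes step: posterior is proportional to prior * P(evidence | hypothesis)."""
    unnorm = {h: priors[h] * likelihood[h] for h in priors}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

# Three hypotheses about which block(s) activate the detector.
priors = {"A": 1/3, "B": 1/3, "both": 1/3}

# Trial 1: block A alone is placed on the machine and it lights up.
# P(lights | hypothesis), with a small false-positive rate assumed.
likelihood1 = {"A": 0.9, "B": 0.1, "both": 0.9}
posterior = bayes_update(priors, likelihood1)
print(posterior)   # "A" and "both" now dominate; "B" has collapsed

# Trial 2: block B alone does NOT light the machine.
# P(no light | hypothesis).
likelihood2 = {"A": 0.9, "B": 0.1, "both": 0.1}
posterior2 = bayes_update(posterior, likelihood2)
print(max(posterior2, key=posterior2.get))   # "A"
```

Each trial reweights the same hypotheses rather than starting over, which is the sense in which the child "improves the conditional probability" across attempts.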

Brian said... Learning is the process by which we acquire new *knowledge*. It involves using creativity both to generate ideas and to try to find mistakes in them.

The semantics are correct, but that's not a formal universal definition suitable for generalization, which you seem to favour (i.e., universal explainers).

I'll give you a hint, since you're a mathematician.

Learning is simply a functional relational mapping.

X - inputs (can be multivariate)

F(X) - functional relation that takes X as input

Y - output (can be multivariate)

F maps input X into output Y. If changes in input X lead to zero change in output Y, then no learning is taking place at all. On the other hand, if a change in X leads to a change in Y, then learning is taking place in the system. Knowledge comes from learning (storing and retrieval): if there is no change in Y when X changes, then there is no knowledge to retrieve, because there wasn't anything to store in the first place.

This is the formal definition given in the mathematics and machine learning literature. This is how the information processing in the brain is being modeled.

X and Y are physical inputs and outputs (neuronal electrical signals). The process of mapping X into Y is what the learning and information-retrieval (memory) processes are about. The function F is not some sort of coordinator, since neurons themselves are self-organized, but their actions seem to obey a certain functional mapping relationship.

Brian, a good description of the formalism of the learning process, be it animal, human or machine, can be found in the book Machine Learning by one of the leading machine-learning researchers, Prof. Tom Mitchell.
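The X → F(X) → Y formalism above can be made concrete with a minimal supervised learner. The one-parameter model, learning rate, and data below are illustrative assumptions, chosen only to show "learning as function approximation", not as a model of the brain.

```python
# Minimal illustration of "learning as a functional mapping" F: X -> Y.
# We fit F(x) = w * x to examples by gradient descent; the "knowledge"
# acquired is stored entirely in the learned parameter w.

def train(examples, lr=0.01, epochs=200):
    """Learn w so that F(x) = w * x approximates the (x, y) examples."""
    w = 0.0
    for _ in range(epochs):
        for x, y in examples:
            error = w * x - y          # prediction error on this example
            w -= lr * error * x        # adjust the mapping to reduce it
    return w

# Training data generated by the hidden relation y = 3 * x.
examples = [(1, 3), (2, 6), (3, 9), (4, 12)]
w = train(examples)

# The learned mapping now generalizes to an input it never saw.
print(round(w, 2))        # 3.0
print(round(w * 10, 1))   # 30.0
```

Before training, changes in X produce the same output (zero); after training, Y tracks X through the learned F, which is exactly the criterion for learning given above.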

There is evidence in recent years from experiments by researchers at IBM and Berkeley, who built some robots and, observing them in the lab, saw them do something very unusual. They think the robots exhibit some sort of conscious learning, because some of the robots performed actions that were not written into the original software. The robots were programmed with various learning schemes allowing them to adapt to their external environment by sensing it (input X) on the fly, and the corresponding actions (output Y) observed were unusual, something the researchers didn't anticipate.

When one learns an explanation, one is not learning a function mapping inputs to outputs. What functional relationships does one learn when, for instance, one learns about the explanations for why capitalism works?

I'm skeptical that any current computer program can learn. The reason is that there is an unsolved philosophical problem, namely an understanding of how creativity works.

Current "AI" programs do not create new knowledge; the knowledge is always somehow (maybe unintentionally) built in by the programmer through their judicious selection of the fitness function or whatever. That may be the case in your example, but I'd need to look at it more closely.

Because of the nature of universality, we won't approach an AI by degrees. We will go from no learning ability whatsoever to a fully universal learner in one jump. This is a consequence of the *reach* of knowledge and again I'd recommend "The Beginning of Infinity" if you are interested. You'll see there that a jump to universality can occur because of a small change in something (it doesn't require large changes).