Archive for February, 2010

Aspro Potamos has drawn my attention to the emerging series of YouTube videos, somewhat polemical in tone, on the War against Neuroscience. If you prefer something less forensic, you may like this series of podcasts on music and the brain from the Library of Congress.

I have been doing my own studies and research for thirty years now on three concepts: Mind, Vision and Reality. I felt it necessary to create novel notations, just as others, like Boole, did, because of the inadequacy of language for this subject. I believe I have arrived at an entirely new methodology for understanding the two fundamental issues that arise when we study Mind, Vision and Reality – their Structure and their Function.

I have been deeply, fundamentally and existentially affected by Quantum Physics, both as a human being and in my own pattern of thinking and analyzing the problems before me. I understand full well that my work is unorthodox, because I am not just presenting a study in one particular niche – although, of course, I have studied specific issues – but am, at this stage in my life, in a position to say that I have come to a general theory comprising an understanding of Mind, Vision and Reality.

That is why, when I have presented a single study or manuscript, it is often difficult to make sense of it: those individual manuscripts do not, in and of themselves, fully explain my general theory of Mind, Vision and Reality. I also know full well that the history of science shows, far too many times, that when a researcher submits an entirely novel, unorthodox methodology, he is quite likely to be rejected by the established body of scientists and philosophers.

But, I still do try.

A sample is here; he particularly asks for views on two pieces here and here.


Where has AI (or perhaps we should talk about AGI) got to now? h+ magazine reports remarkably buoyant optimism in the AI community about the achievement of Artificial General Intelligence (AGI) at a human level, and even beyond. A survey of opinion at a recent conference apparently showed that most believed AGI would reach and surpass human levels during the current century, with the largest group picking out the 2020s as the most likely decade. If that doesn’t seem optimistic enough, they thought this would occur without any additional funding for the field, and some even suggested that additional money would be a negative, distracting factor.

Of course those who have an interest in AI would tend to paint a rosy picture of its future, but the survey just might be a genuine sign of resurgent enthusiasm, a second wind for the field (‘second’ is perhaps understating matters, but still). At the end of last year, MIT announced a large-scale new project to ‘re-think AI’. This Mind Machine Project involves some eminent names, including none other than Marvin Minsky himself. Unfortunately (following the viewpoint mentioned above) it has $5 million of funding.

The Project is said to involve going back and fixing some things that got stalled during the earlier history of AI, which seems a bit of an odd way of describing it, as though research programmes that didn’t succeed had to go back and relive their earlier phases. I hope it doesn’t mean that old hobby-horses are to be brought out and dusted off for one more ride.

The actual details don’t suggest anything like that. There are really four separate projects:

Mind: Develop a software model capable of understanding human social contexts – the signposts that establish these contexts, and the behaviors and conventions associated with them.
Research areas: hierarchical and reflective common sense
Lead researchers: Marvin Minsky, Patrick Winston

Memory: Further the study of data storage and knowledge representation in the brain; generalize the concept of memory for applicability outside embodied local actor context
Research areas: common sense
Lead researcher: Henry Lieberman

Brain and Intent: Study the embodiment of intent in neural systems, incorporating wet-laboratory and clinical components as well as mathematical modeling and representation. Develop functional brain and neuron interfacing abilities. Use intent-based models to facilitate the representation and exchange of information.
Research areas: wet computer, brain language, brain interfaces
Lead researchers: Newton Howard, Sebastian Seung, Ed Boyden

This all looks very interesting. The theory of reconfigurable asynchronous logic automata (RALA) represents a new approach to computation which, instead of concealing the underlying physical operations behind high-level abstraction, makes the physical causality apparent: instead of physical units being represented in computer programs only as abstract symbols, RALA is based on a lattice of cells that asynchronously pass state tokens corresponding to physical resources. I’m not sure I really understand the implications of this – I’m accustomed to thinking that computation is computation whether done by electrons or fingers; but on the face of it there’s an interesting comparison with what some have said about consciousness requiring embodiment.
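To make the token-passing idea concrete, here is a toy sketch of a lattice of cells asynchronously handing conserved tokens to their neighbours. This is not the actual RALA formalism (real RALA cells also perform logic operations, and the update rules are defined by the theory, not chosen here); it only illustrates the one property described above, that state tokens behave like physical resources which are moved around rather than conjured out of abstraction.

```python
import random

def make_lattice(width, height, tokens_per_cell=1):
    # Each cell holds an integer token count standing in for a physical resource.
    return [[tokens_per_cell for _ in range(width)] for _ in range(height)]

def step(lattice, rng):
    # Asynchronous update: one randomly chosen cell "fires", passing a single
    # token to a randomly chosen neighbour -- if it has a token to give.
    h, w = len(lattice), len(lattice[0])
    y, x = rng.randrange(h), rng.randrange(w)
    if lattice[y][x] == 0:
        return
    neighbours = [(ny, nx)
                  for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                  if 0 <= ny < h and 0 <= nx < w]
    ny, nx = rng.choice(neighbours)
    lattice[y][x] -= 1
    lattice[ny][nx] += 1

rng = random.Random(0)
grid = make_lattice(4, 4)
total_before = sum(map(sum, grid))
for _ in range(100):
    step(grid, rng)
total_after = sum(map(sum, grid))
# Tokens are conserved: the "computation" never creates resource from nothing.
```

The point of the sketch is the conservation law: however the tokens are shuffled, their total never changes, which is the sense in which the program's symbols remain tied to physical quantities.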

I imagine the work on Brain and Intent is to draw on earlier research into intention awareness. This seems to have been studied most extensively in a military context, but it bears on philosophical intentionality and theory of mind; in principle it seems to relate to some genuinely central and difficult issues. Reading brief details I get the sense of something which might be another blind alley, but is at least another alley.

Both of these projects seem rather new to me, not at all a matter of revisiting old problems from the history of AI, except in the loosest of senses.

In recent times within AI I think there has been a tendency to back off a bit from the issue of consciousness, and to spend time instead on lesser but more achievable targets. Although the Mind Machine Project could be seen as superficially conforming to this trend, it seems evident to me that the researchers see their projects as heading towards full human cognition, with all that that implies (perhaps robots that run off with your wife?).

Meanwhile in another part of the forest Paul Almond is setting out a pattern-based approach to AI. He’s only one man, compared with the might of MIT – but he does have the advantage of not having $5 million to delay his research…

Introspection, the direct examination of the contents of our own minds, seems itself to be in many minds at the moment. The latest issue of the Journal of Consciousness Studies was devoted to papers on introspection, marking the tenth anniversary of the publication of The View from Within, by Francisco Varela and Jonathan Shear (which was itself a special edition of the JCS); and now Eric Schwitzgebel has produced a new entry for the Stanford Encyclopedia of Philosophy.

The two accounts are of course quite different in some respects. The encyclopaedia entry is a careful, scholarly account, neutral and comprehensive; the JCS issue is openly a rallying-cry in support of a programme flowing from Varela’s work. This, it seems, called for an end to the ban on examination of lived experience; the JCS gives the impression that it was something of a milestone, though Schwitzgebel’s piece does not mention it (he does cite an earlier paper by Varela, once again in the JCS).

What’s all this about a ban? Well, back in the nineteenth century, psychologists had no fears about using introspective evidence; it was thought that a proper scientific effort would lead to an objectively verifiable kind of phenomenology. We should be able to classify the elements of mental experience and clarify how they worked together, just by examining what went on in our own heads. A great deal of work was done on all this. (It was a great disappointment for me to discover, on first opening Brentano’s Psychology from an Empirical Standpoint, that it consisted almost entirely of this kind of thing, and that the only passage about intentional inexistence – the interesting issue – was the couple of paragraphs I had already read as quotations in several other books.) There was a gradual refinement of the methods involved, leading on to the great heyday of introspectionism, with Wundt and Titchener in the lead. Unfortunately, it became clear that the rival schools of introspectionism had begun to come up with results which in some respects were radically different and incompatible, and since our own introspections are by their nature private and unverifiable, all they could really do by way of settling the issues was to shout at each other.

This embarrassing impasse led to a reaction away from introspection and to the rise of behaviourism, which not only denied the usefulness of examining our inner experience, but actually went to the extreme of denying that there was any such thing as inner experience. Behaviourism in its turn fell out of favour, but according to Varela there remained an instinctive distrust of introspection which continued to put people off it as an avenue of research. This is the ‘ban’ he wanted to see overturned.

Was there, is there, really a ban? Not exactly. Apart from the most dogmatic of the behaviourists, no-one has ever tried to exclude introspection altogether. In recent times, introspective evidence has been widely accepted – the problem of qualia, thought by some to be the problem of consciousness, depends entirely on introspection. I think the real problem arises when we adopt special methods. In order to obtain consistent results, the old introspectionists thought extensive training was necessary. It wasn’t enough to sit and think for a bit; you had to have mastered certain skills of discrimination and perception. The methodological dangers involved in teaching your researchers what kind of thing they could legitimately look for are clear.

Unfortunately, it seems to be very much this kind of programme which the JCS authors would like to resurrect – or rather, have resurrected, and wish to gain acceptance and support for. Once again we are going to need to learn how to introspect properly before our observations will be acceptable. What makes it worse for me is that the proposal seems to be tied up with NLP – Neuro-linguistic Programming. I don’t know a great deal about NLP: it seems to be a protean doctrine which shares with the Holy Roman Empire the property of not really being any of the three things in its name – but for me it does nothing to render another trip down this particular blind alley more attractive.

I don’t know about that, but aren’t they right to emphasise the potential value of introspection? Isn’t it the case that introspection is our only source of infallible information? Most of the things we perceive are subject to error and delusion, but we can’t, for example, be wrong about the fact that we are feeling pain, can we? That seems interesting to me. Our impressions of the outside world come to us through a chain of cause and effect, and at any stage errors or misinterpretations can creep in; but because introspection is direct, there’s no space for error to occur. You could well say it’s our only source of certain knowledge – isn’t that worth pursuing a little more systematically?

Infallible? That is the exact reverse of the truth: in fact all introspections are false. Think about it. Introspection can only address the contents of consciousness, right? You can’t introspect the unconscious mental processes that keep you balanced, or regulate your heartbeat. But all of the contents of consciousness have intentionality – they’re all about things, yes? So to have direct experience of mental content is to be thinking about something else – not about the mental state itself, but about the thing it’s about! Now when we attempt to think directly about our own mental states, it follows that we’re not experiencing them in themselves – we’re experiencing a different mental state which is about them. In short, we’re necessarily imagining our mental states. Far from having direct contact, we are inevitably thinking about something we’ve just made up.