Sunday, December 02, 2012

I just gave a talk, via Skype from Hong Kong, at the Humanity+ San Francisco conference…. Here are some notes I wrote before the talk, basically summarizing what I said in the talk (though of course, in the talk I ended up phrasing many things a bit differently...).

I'm going to talk a bit about language, and how it relates to mind and reality … and about what may come AFTER language as we know it, when mind and reality change dramatically due to radical technological advances

Language is, obviously, one of the main things distinguishing humans from other animals. Dogs and apes and so forth do have their own languages, which have their own kinds of sophistication -- but these animal languages seem to be lacking in some of the subtler aspects of human languages. They don't have the recursive phrase structure that lets us construct and communicate complex conceptual structures.

Dolphins and whales may have languages as sophisticated as ours -- we really don't know -- but if so their language may be very different. Their language may have to do with continuous wave-forms rather than discrete entities like words, letters and sentences. Continuous communication may be better in some ways -- I can imagine it being better for conveying emotion, just as for us humans, tone and gesture can be better at conveying emotion than words are. Yet, our discrete, chunky human language seems to match naturally with our human cognitive propensity to break things down into parts, and with our practical ability to build stuff out of parts, using tools.

I've often imagined the cavemen who first invented language, sitting around in their cave speculating and worrying about the future changes their invention might cause. Maybe they wondered whether language would be a good thing after all -- whether it would somehow mess up their wonderful caveman way of life. Maybe these visionary cavemen foresaw the way language would enable more complex social structures, and better passage of knowledge from generation to generation. But I doubt these clever cavemen foresaw Shakespeare, William Burroughs, YouTube comment spam, differential calculus, mathematical logic or C++ …. I suppose we are in a similar position to these hypothetical cavemen when we speculate about the future situations our current technologies might lead to. We can see a small distance into the future, but after that, things are going to happen that we utterly lack the capability to comprehend…

The question I want to pose now is: What comes after language? What's the next change in communication?

My suggestion is simple but radical: In the future, the distinction between linguistic utterances and minds is going to dissolve.

In the not too distant future, a linguistic utterance is simply going to be a MIND with a particular sort of cognitive focus and bias.

I came up with this idea in the course of my work on the OpenCog AI system. OpenCog is an open-source software system that a number of us are building, with the goal of eventually turning it into an artificial general intelligence system with capability at the human level and beyond. We're using it to control intelligent video game characters, and next year we'll be working with David Hanson to use it to control humanoid robots.

What happens when two OpenCog systems want to communicate with each other? They don't need to communicate using words and sentences and so forth. They can just exchange chunks of mind directly. They can exchange semantic graphs -- networks of nodes and links, whose labels and whose patterns of connectivity represent ideas.
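To make the idea concrete, here's a toy sketch of that kind of direct exchange -- ideas as labeled nodes, relationships as labeled links. This is purely illustrative and is not OpenCog's actual API or data structures:

```python
# Toy model of two minds exchanging a semantic graph directly,
# with no linearization into words and sentences.

class SemanticGraph:
    def __init__(self):
        self.nodes = set()   # concept labels, e.g. "cat"
        self.links = set()   # (relation, source, target) triples

    def add_link(self, relation, source, target):
        self.nodes.update([source, target])
        self.links.add((relation, source, target))

    def merge(self, other):
        """Naive merge: absorb another mind's graph wholesale."""
        self.nodes |= other.nodes
        self.links |= other.links

mind_a = SemanticGraph()
mind_a.add_link("inherits", "cat", "animal")

mind_b = SemanticGraph()
mind_b.add_link("inherits", "dog", "animal")

mind_b.merge(mind_a)   # mind B now holds mind A's idea directly
print(sorted(mind_b.nodes))   # ['animal', 'cat', 'dog']
```

The naive `merge` here is exactly the problem the next paragraph raises: it assumes both minds label and organize concepts the same way.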

But you can't just take a chunk of one guy's mind and stick it into another guy's mind. When you're merging a semantic graph from one mind into another, some translation is required -- because different minds will tend to organize knowledge differently. There are various ways to handle this.

One way is to create a sort of "standard reference mind" -- so that, when mind A wants to communicate with mind B, it first expresses its idiosyncratic concepts in terms of the concepts of the standard reference mind. This is a scheme I invented in the late 1990s -- I called it "Psynese." A standard reference mind is sort of like a language, but without so much mess. It doesn't require thoughts to be linearized into sequences of symbols. It just standardizes the nodes and links in semantic graphs used for communication.
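The standard-reference-mind scheme can be sketched as a pair of relabeling steps: sender-vocabulary to standard, then standard to recipient-vocabulary. All names and mappings here are made up for illustration; this is not a real Psynese specification:

```python
# Hedged sketch of the "standard reference mind" idea: each mind keeps a
# mapping from its private concept labels to standard ones, and graphs
# are relabeled through the standard on their way between minds.

def relabel(links, mapping):
    """Rewrite (relation, source, target) links using a concept mapping;
    concepts absent from the mapping pass through unchanged."""
    f = lambda c: mapping.get(c, c)
    return {(rel, f(s), f(t)) for (rel, s, t) in links}

# Mind A privately calls the standard concept "feline" by the label "mreow"
a_links = {("inherits", "mreow", "animal")}
a_to_standard = {"mreow": "feline"}

# Mind B calls the same standard concept "kitty"
b_to_standard = {"kitty": "feline"}
standard_to_b = {v: k for k, v in b_to_standard.items()}

in_standard = relabel(a_links, a_to_standard)
received_by_b = relabel(in_standard, standard_to_b)
print(received_by_b)   # {('inherits', 'kitty', 'animal')}
```

Note that each mind only ever has to maintain one mapping -- to the standard -- rather than one per communication partner.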

But Psynese is a fairly blunt instrument. Wouldn't it be better if a semantic graph created by mind A, had the savvy to figure out how to translate itself into a form comprehensible by mind B? What if a linguistic utterance contained, not only a set of ideas created by the sender, but the cognitive capability to morph itself into a form comprehensible by the recipient? This is weird relative to how language currently works, but it's a perfectly sensible design pattern…
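One crude way to picture a self-translating utterance: instead of one fixed rendering, the utterance carries several renderings of the same thought and the logic to choose one its recipient can comprehend. This is a deliberately simplistic, entirely hypothetical sketch -- a real version would synthesize the translation, not pick from a list:

```python
# Speculative sketch: an utterance that is itself a tiny agent, carrying
# both content and the ability to adapt that content to each recipient.

class SmartUtterance:
    def __init__(self, variants):
        # Several renderings of the same thought, each a set of
        # (relation, source, target) links in some vocabulary.
        self.variants = variants

    def deliver_to(self, recipient_concepts):
        """Return the first rendering whose concepts the recipient
        all knows, or None if no comprehensible form is found."""
        for links in self.variants:
            concepts = {c for (_rel, s, t) in links for c in (s, t)}
            if concepts <= recipient_concepts:
                return links
        return None

thought = SmartUtterance([
    {("inherits", "feline", "animal")},
    {("inherits", "kitty", "critter")},
])
print(thought.deliver_to({"kitty", "critter", "banana"}))
# {('inherits', 'kitty', 'critter')}
```

The design point is that the adaptation logic travels with the message, rather than living in a shared standard or in the recipient.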

That's my best guess at what comes after language. Impromptu minds, synthesized on the fly, with the goal of translating particular networks of thought into the internal languages of various recipients.

If I really stretch my brain, I can dimly imagine what such a system of thought and communication would be like. It would weave together a group of minds into an interesting kind of global brain. But we can't foresee the particulars of what this kind of communication would lead to, any more than a bunch of cavemen could foresee Henry Miller, reddit or loop quantum gravity.

Finally, I'll pose you one more question, which I'm not going to answer for you. How can we write about the future NOW, in a way that starts to move toward a future in which linguistic utterances and minds are the same thing?