July 30, 2007

When Dvorsky met Minsky

Of all the celebrities and bigwigs I looked forward to meeting at TransVision 2007 there was only one person who I was truly nervous about running into – a person who gave me that ‘I’m going to squeal like a little girl when I see him’ kind of feeling.

A friend cautioned me by claiming that he was a difficult man and not very approachable. I dismissed the warning and patiently waited for an opportunity to start a conversation with him.

I eventually got my chance. I was with two other friends when the three of us bumped into Minsky in the reception area of the conference hall. Without hesitation I approached and introduced myself. After we shook hands I told him how much I appreciated his work and how much of an honour it was for me to finally meet him. He nodded his head and didn’t say a word.

Working to move the conversation along, I told him that while I was conducting research for my presentation I discovered that he was a presenter at the seminal SETI conference in 1971 in Byurakan. Minsky made waves at that conference by having the audacity to suggest that advanced extraterrestrial civilizations would likely be comprised of machine minds. It was a controversial suggestion, one that has only come into acceptance in more recent times. I asked Minsky for a first-hand account of how his idea was received back in 1971.

He stood there, just blankly looking at me, and didn’t say a single word. We all waited in silence for what seemed an eternity. I got the distinct impression that he was thoroughly uninterested in our little group.

Being a sucker for punishment I decided to move the conversation along. I unabashedly gave him the 10-second executive summary of my TV07 presentation, in which I make some claims about the limitations of extraterrestrial civilizations and how this might account for the Great Silence and the problem that is the Fermi Paradox.

This finally got Minsky going. He had attended a SETI conference two weeks prior and was impressed with what he heard there. Minsky suggested that the reason we don’t see any signs of obvious megascale engineering or cosmological re-tuning by advanced ETIs is that they have no sense of urgency to embark upon such projects. He argued that advanced intelligences won’t engage in these sorts of Universe-changing exercises until the very late stages of the cosmos.

Jeez, I thought to myself, I hadn't considered that.

Leave it to Marvin Minsky to give me some serious food for thought a mere two hours before I was to give my talk. I was suddenly worried that this consideration would pierce a glaring hole in my argument.

After another minute of idle chit-chat I excused myself from Minsky's company and found a little corner where I could have my little micro-panic and contemplate his little theory.

The more I thought about it, however, the more unsatisfied I became with his answer; virtually everyone has a rather smug solution to the Fermi Paradox, and Marvin Minsky is no exception. Specifically, I was concerned with how such a theory could be exclusive to all civilizations. It seemed implausible to believe that not even one renegade civilization would take it upon itself to change the rules of the cosmos if it had the capacity to do so.

Moreover, given the power to reshape the Universe, a strong case could be made that a meta-ethical imperative exists to turn the madness that is existence into something profoundly more meaningful and safer. As Slavoj Žižek once said, existence is a catastrophe of the highest order. Timothy Leary described the Universe as an "ocean of chaos."

Waiting until the last minute to create a cosmological paradise (assuming such a thing is even possible) would seem to be both exceptionally risky and irresponsible -- not just to the members of a civilization capable of such feats, but to the larger universal community itself.

Phew. That's right, that's the answer. Ha, take that, Minsky!

So, after rationalizing a counter-argument to Minsky's suggestion, I was able to calm down and prepare myself for my presentation and deal with any follow-up questions that could be thrown my way.

And that's how I met Marvin Minsky.

Sure, he's not the most personable man I've ever met, but I got the sense that he's at a time in his life where a) he knows he owes nothing to anyone and b) he'd rather engage with people who can contribute to his life's work and his ongoing struggle to solve the problem that is human cognition. And he's still as sharp as they come.

7 comments:

I find the debates around the Fermi Paradox fascinating, as it's a wonderful example of trying to solve a problem using only logic. (My pet smug solution, btw, is that the reason we haven't heard anything is that we're listening on a disused medium -- clear radio is only of use for a short period before civilizations drop into spread-spectrum or something Not Yet Discovered.)

This comment, though, brought me up short:

...given the power to reshape the Universe, a strong case could be made that a meta-ethical imperative exists to turn the madness that is existence into something profoundly more meaningful and safer.

The assumption here seems to be that, given the physical capability to make stellar and galactic-scale modifications, a civilization would also have the cognitive capability to understand the complex interactions of materials and forces at that scale, so that what looked like a good idea at the time doesn't lead to crashing disasters (you wondered where quasars came from, right?).

Given that we have as our single case study a contrary bit of evidence -- that is, that humankind has more than enough physical capacity to make major geo-modification, but in no way can humans yet predict with anything close to enough certainty the complexity of results -- I'd tend to be skeptical of that assumption. Moreover, I'd argue that a civilization that has managed to survive to the point of being able to make stellar/galactic mods will have lived through enough near-disasters to recognize when they don't know enough to undertake a project of that magnitude.

If so, Minsky may be right that only at the putative end of the world (star system, galaxy, universe) would the risks of fiddling with things be worth taking.

I saw Minsky at a university talk on AI about 25 years ago. From the way he gave his talk, I had the impression that his personality was the same as what you recently experienced. You could probably ask others who have known and interacted with him over the whole span of years, but I think unless you time-travelled back to before he became an adult or even a teenager, you would end up experiencing the same brilliant but not great on social niceties person.

Re: the 'disused medium' hypothesis, if an ETI wanted to communicate with us they would have done so by now through the use of Bracewell probes. There's been enough time to saturate the galaxy with these probes, which could start to send powerful radio signals to us after being alerted by our radio signals. We can be fairly confident that no Bracewell probe resides within a radius of roughly 30 to 50 light years of Earth.

As for your second point -- what we'll hereafter call Cascio's Cosmic Precautionary Principle -- that can also be challenged. I would assume that an advanced ETI could run simulations to see if their engineering strategies could work. They would then reach one of two conclusions: they can either do it or they can’t. If yes, then re-engineer immediately; if not, adopt the optimal mode of existence and slow the clock speed down to the slowest possible rate until the Great Rip happens.

It's either a chess or tic-tac-toe universe; we're not sure which one we reside in – we don’t know whether or not we can really remake this thing or if it's really just a simple and stupid universe that can't really be tweaked.

See, that's the thing: it may not be possible to construct a simulation of sufficient complexity to model all threatening outcomes -- and threatening outcomes, when working at the scale of stellar/galactic engineering, can be *really* threatening. Or, to put it another way, given that the simulation would need to be able to capture the full complexity of the universe (if not the full scale) and run it faster than real-time, it may be simpler just to engineer a new universe that fits the necessary requirements for the ETI, and move there.

George Dvorsky

Canadian futurist, science writer, and ethicist, George Dvorsky has written and spoken extensively about the impacts of cutting-edge science and technology—particularly as they pertain to the improvement of human performance and experience. He is a contributing editor at io9, Chairman of the Board at the Institute for Ethics and Emerging Technologies, and program director for the Rights of Non-Human Persons program.