Archive for October, 2015

Halloween is an appropriate time to talk about a potentially very scary topic: Possible future paths toward Artificial Intelligence.

All human and animal intelligence has evolved according to one principle: The fittest species survives. And this principle infuses every member of every species. Both collectively and as individuals, our most powerful instinct is to continue to stay alive.

So it stands to reason that as computers continue to increase in power, and artificial intelligence is therefore able to come ever closer to the level of richness and complexity that we associate with natural intelligence, there are at least two possible ways we can achieve “sentient” level AI.

One way is to evolve it the way natural intelligence has evolved: By some survival-optimizing fitness function. From a developmental perspective, this strategy has clear benefits.

For one thing, we know it works. All natural intelligence that we know of has come about as an optimization of survival fitness.

For another thing, it is a relatively easy path to success, compared to the alternatives. Genetic algorithms have the ability to optimize themselves. We don’t even need to completely know how they work in order to improve them. We just need to know how to iterate them.
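To make the "we just need to know how to iterate them" point concrete, here is a toy sketch of that iteration loop. The fitness function here (matching a fixed bit string) is a stand-in for whatever survival criterion one might choose, and all the names and parameters are invented for illustration; the point is only that select-mutate-repeat improves the population without our needing to understand any individual's inner workings.

```python
import random

# Toy genetic algorithm: evolve a bit string toward a target.
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]

def fitness(genome):
    # Survival criterion: how many bits match the target.
    return sum(1 for g, t in zip(genome, TARGET) if g == t)

def mutate(genome, rate=0.05):
    # Flip each bit independently with small probability.
    return [1 - g if random.random() < rate else g for g in genome]

random.seed(42)
population = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    best = population[0]
    if fitness(best) == len(TARGET):
        break
    # The fittest half survives unchanged; the rest are mutated copies.
    survivors = population[: len(population) // 2]
    population = survivors + [mutate(random.choice(survivors)) for _ in survivors]

print(fitness(best), "of", len(TARGET), "bits correct")
```

Notice that nothing in the loop "understands" the genomes it is improving; it only measures them and iterates.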

The other way that AI can evolve is through explicit design. In this scenario, we figure out over time how to construct sentient level AI without recourse to a self-evolving survival optimization strategy.

This second path is much more difficult, because it requires a much greater level of explicit modeling. But it also has one very useful advantage.

Any AI that develops through survival optimizing iteration will probably value its own existence above anything else. And that includes us. If something goes wrong, that could be scary. We’re talking Skynet level scary.

Whereas AI that has not gone through this development process won’t have any intrinsic motivations. It will just be one more machine for us to use. And it won’t care if we switch it off, any more than a light switch cares if we switch it off.

I was inspired by the heroic undertaking of Boyhood to think about other dramatic uses for “extreme long form” production. As you may know, Richard Linklater filmed Boyhood over the course of 12 years. In the final film, we literally see actors grow up or grow old before our eyes.

In the last few years, many films and TV shows have dabbled in the concept of multiple parallel timelines. Groundhog Day, Sliding Doors, Fringe, The Butterfly Effect, Doctor Who, Timecop, Source Code, Looper, The Man in the High Castle: these are but a few of many offerings based on the premise of multiple alternate realities proceeding in parallel.


Suppose we were to start a twelve-year-long film production with the express purpose of capturing all of those parallel realities as they progress? The result could be something truly new.

Imagine tuning into your favorite TV show every week and seeing the same actors progress through different multi-year narratives. One week they would be growing up or growing old in a comic universe. The next week in a thriller, or a horror story, or a RomCom.

You would see the same actors literally going through a large chunk of their lives every week. Except in each episode they would be living different lives.

Of course there would be a certain amount of risk involved. For example, if one of the actors were to die before the 12 years are up, their disappearance (and possible replacement) would be reflected simultaneously in every fictional parallel universe. And maybe that would be ok.

I would love to try something like this. Or perhaps, in another life, I already have. 😉

The first time I saw the computer animated film Final Fantasy: The Spirits Within, when it came out in 2001, was on a night I was supposed to be going on a first date. Well, sort of.

I thought that was the night of our date, but I had gotten the day wrong. Our date was actually the following night.

Since this was in the pre-cellphone days, I wrongly assumed that my date had stood me up, so I bought myself a ticket and went to see the movie by myself.

And I hated it. I hated everything about it. In fact, I was appalled by it. The story made no sense and the computer animated characters were deep into the uncanny valley. For all the money up there on screen, it felt like a complete fail.

The following day I realized my scheduling mistake. In the end, I never told my date that I had already seen the movie. I just went ahead and watched it again with her, as though nothing had happened.

And this time, I really liked it! Because I’d already seen the film, I could now easily follow the convoluted plot. And now I knew to just ignore the clumsy character animation. Instead I focused on the beautiful backgrounds, and found myself on a spectacular ride through a fun and inventive world, filled with endless visual delights.

I don’t think it would have been possible for me to have had that experience the first time. And I don’t think I ever would have seen this film again, had fate not intervened.

It makes me wonder — how many wonderful experiences have I missed because I thought it’s only the first time that counts?

I was watching Jaron Lanier play jazz piano with a small ensemble recently, and it occurred to me that in his freeform, shambling, yet artful way, he was radiating the same energy that suffuses his talks about technology: A kind of extreme casual intelligence, seemingly spur of the moment but actually the product of years of thought and contemplation.

And suddenly I decided that when I give talks about the future, I need to be improvising on a piano keyboard. The words I speak about virtual reality, cyber-connections, neural implants, should be complemented by freeform improvised jazz.

The hybrid format I’m contemplating probably breaks at least twenty different rules that keep CP Snow’s two cultures safely apart. Which means I will probably piss off quite a few people. On the other hand, I think Bob Dylan was totally on target in Newport in 1965 (you could look it up).

So today I tweaked my Chalktalk program, the one I’ve been using for all my recent teaching and presentations. Normally, when I use Chalktalk, I sketch as I talk, and those sketches then turn into animated ideas and creatures, which act out whatever topic I’m talking about.

But today I added a new feature: The animated creatures can appear in response to certain chords I play on my (midi) piano keyboard. Now, in addition to puppetry by drawing, puppetry by music.
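I don't know a thing about how you'd wire this up yourself, but the chord-recognition half of the idea is simple enough to sketch. This is a hypothetical illustration, not Chalktalk's actual code: it maps the pitch classes of the currently held MIDI note numbers to a chord type, which a program could then use to trigger a particular animated creature. The chord templates and names are the usual music-theory ones; everything else is invented for illustration.

```python
# Pitch-class shapes (intervals above the root) for a few chord types.
CHORD_SHAPES = {
    frozenset({0, 4, 7}): "major",
    frozenset({0, 3, 7}): "minor",
    frozenset({0, 4, 7, 10}): "dominant seventh",
}

def identify_chord(notes):
    """Return the chord type of the held MIDI note numbers, if any.

    Try each held pitch class as a candidate root and compare the
    resulting interval shape against the known templates.
    """
    pcs = {n % 12 for n in notes}
    for root in pcs:
        shape = frozenset((p - root) % 12 for p in pcs)
        if shape in CHORD_SHAPES:
            return CHORD_SHAPES[shape]
    return None

# e.g. C major (C4, E4, G4) and A minor (A3, C4, E4):
print(identify_chord({60, 64, 67}))  # major
print(identify_chord({57, 60, 64}))  # minor
```

In a live setting, the note numbers would come from note-on/note-off MIDI messages, and the returned chord name would index into a table of sketches to summon.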

So I guess I’m already working on that talk Jaron inspired. And if I do it right, the visual ideas that show up and move about on screen will appear to flow naturally not just from the words, but also from the music.

This week a friend told me that somebody she knows is writing a book about the history of cinnamon. That seems like a great idea for a book, because it creates an opportunity to talk about so many interesting topics, from cuisine to culture to capitalism to colonialism.

But then I got to thinking. What if — just maybe — there was some sort of miscommunication?

Publisher:

We’d like you to write a history of Cinema. We’re offering a $100K advance.

Author:

Hmm, that’s an interesting topic. Are you sure people will want to read about something so common?

Publisher:

Oh yes, it’s part of people’s everyday lives, isn’t it?

Author:

Yes … I guess so. Well, ok, I’m not going to turn down such a generous cash advance. I’ll see what I can do.

… some months later …

Publisher:

How’s the book coming?

Author:

You were right — this is a fascinating topic.

Publisher:

So you’re managing to cover fresh ground?

Author:

Well, not necessarily fresh ground, but definitely spice.

Publisher:

Spice is good. Readers like that. But make sure it’s in good taste.

Author:

Oh, very good taste. If you add the right ingredients to the mix.

Publisher:

Sounds wonderful! When can we expect a draft?

The author was so heartened by his publisher’s unexpected enthusiasm that he started work on a sequel, completely on spec: A Brief History of Thyme.

Today, for a virtual reality project we are doing, a student asked me if there was a good way to arrange dots around a sphere in a nice random way. He isn't a math or computer science major, but he has been taking my computer graphics class, where we focus on how math can describe visual things.

So I felt confident that I could just describe to him, in a few words, a good approach: Instead of picking dots on a sphere, pick dots inside the cube that surrounds the sphere. This is easier, because you can just pick the dot’s x, y and z coordinates independently.

Then if any dot you pick falls outside the sphere, just throw it out and try again. So now you’ve got a collection of random dots that happen to all be inside the sphere. Now all you have to do is push all those points out to the sphere’s surface, and you’re done.
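The recipe above fits in a few lines of code. This is my own minimal sketch, not the student's program; the only detail added beyond the prose is a guard against points so close to the center that normalizing them would divide by nearly zero.

```python
import random

def random_point_on_sphere():
    """Pick a uniform random point on the unit sphere by rejection.

    Pick x, y and z independently in the surrounding cube [-1, 1]^3,
    throw out any dot that falls outside the sphere, then push the
    survivor out to the sphere's surface by normalizing it.
    """
    while True:
        x = random.uniform(-1.0, 1.0)
        y = random.uniform(-1.0, 1.0)
        z = random.uniform(-1.0, 1.0)
        r2 = x * x + y * y + z * z
        if 1e-12 < r2 <= 1.0:              # keep only dots inside the sphere
            r = r2 ** 0.5
            return (x / r, y / r, z / r)   # push out to the surface

points = [random_point_on_sphere() for _ in range(1000)]
```

Since the unit sphere fills about 52% of its bounding cube, roughly half the picks survive, so the retry loop terminates quickly in practice.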

The cool thing was that this was all I needed to tell him. He totally understood it, got why it worked, and started coding it then and there. The other eight or so students around the table also got it, and none of them were math or computer science majors.

And they all seemed very interested when I said that the technique I’d just described was a well known technique in math called the “Monte Carlo method”. It’s called that because you basically keep rolling the dice like you’re in a gambling casino, but then you get to decide which rolls of the dice you want to keep.

I love the fact that a group of students who think of themselves as artists, animators and designers are comfortable with this way of thinking about visual things. “Making pictures with math” may just be catching on.

I was having a delightful discussion today with one of my Ph.D. students. The topic was some exciting new mathematical ideas we are working on in our computer graphics research.

Part of me was completely immersed in the conversation. But another part of me was listening to the rhythm and flow of it, as a sort of fly on the wall. And I realized that my student and I are so absorbed in this research, we describe things to each other in a way that a third person would have a very difficult time following.

It’s not that the concepts are so radically difficult. It’s more that we have developed our own shared language for describing the mathematical pictures in our minds, and discussing ways that we might play with those ideas and try out different possibilities.

Any other reasonably thoughtful person could indeed be brought up to speed on what we were saying. But they would probably need to learn some version of this shared language. And that all by itself would take time and effort.

So much of shared understanding can pivot on shared language. If you can’t properly express to each other the thoughts and images in your mind, then you can’t explore those ideas together — you can’t go on exciting journeys with each other.

Of course a lot of this comes down to motivation. My student and I are both passionate about what we consider to be beautiful ideas in mathematics and computer graphics. I am sure that something similar transpires between two jazz musicians using a verbal shorthand that they have developed over time to discuss cool musical ideas.

It’s not that a third person couldn’t learn their jazz language. It’s more that the person might not want to. Practically speaking, the ability to learn a particular shared language of ideas requires an inherent love of those ideas.

And so we develop languages that bind us to our respective tribes, whether they be tribes of sports, medicine, war, politics, music or computer graphics. We recognize the people who share a tribe because those people have put in the effort to learn our tribal language.

I’ve been noticing that a lot of the shared “virtual” reality experiences in my own research are highly asymmetric. Our team seems to be going for maximum disruption of received notions of shared space.

In our experiments, one avatar might be tiny and another huge, one appearing as a realistic human and another as a glowing ball of light. We are creating social experiences between people that place them in radically different vantage points.

One would think that trying to keep things as symmetric as possible would be the name of the game. But somehow that seems like a cop out, a reduction to the obvious. Distance lends value to proximity, and difference gives power to connection.

After all, isn’t our “common viewpoint” merely a well-learned illusion? No two human beings have ever had the same literal experience of reality, and nobody, other than you, has seen the unique sequence of images that form your particular visual life experience.

Yet you accept, without hesitation, that you share the same reality with other people who have never seen the images that have flowed into your eyes and your brain.

So why shouldn’t we explore radically different subjective experience? After all, isn’t that an apt metaphor for our actual experience of so called “real life”?