Sunday, April 24, 2011

I often feel a need, even an urgency, for a smartphone. Usually it comes up in situations where it would be impolite, impractical, or too conspicuous to break out a laptop (such as standing in line in a store -- I can't type very well cradling a laptop with one arm, plus there's no WiFi), but a smartphone would be socially acceptable, especially as everyone else is playing with theirs. Those 5-10 minute snippets would be enough for me to write another paragraph of a story, a blog post, or a Facebook comment. But that would depend on a good keyboard.

The matter of the keyboard is what has kept me in the analysis-paralysis stage of a smartphone purchase. Ever since my AT&T Tilt, a Windows phone, died 2-3 years ago, I've been at a loss as to what to get next. Not because I miss it, but because I saw how unsatisfying a phone can be even when it looks great "on paper". It had a slide-out keyboard, but the keys were so tiny and, what's worse, so closely spaced (there were no gaps between them) that typing was nearly impossible. If the keys are farther apart (even if they are tiny), typing is easier. Such is the keyboard on the G2 (a.k.a. the Google phone), which is high on my list of candidates.

But lately I have also tried some touch-screen keyboards, and they are not so bad, especially on phones with larger screens. How would I know which of them is best for my purposes, though? Unlike most smartphone users, I intend to do quite a bit of writing on those keyboards, and "playing around" with a device does not give you a good idea of what it's like to type on it for longer periods of time. So what is the solution? To buy one and discover that it doesn't work for me? Physical-vs-onscreen keyboard is one of my dilemmas.

Another issue: should I wait until mid-July, when I'll be eligible for an upgrade at AT&T, my current carrier, which will let me buy a smartphone for a fraction of the price? Or should I buy a phone at full price? They can be damn expensive. Or should I switch to another carrier, such as T-Mobile, which is the only one that carries the G2? Then I could buy a G2 at a discount with a 2-year contract, but my monthly data plan would cost more than if I had bought the phone at full price, without a contract. So after 2 years, buying it at full price would have paid off. And if I stay with AT&T, will it turn out that the only kind of phone I can get cheaply is a refurbished one? My Tilt was refurbished, and it died after 10 months, long past its warranty. If another refurbished phone dies on me, I will need a new phone again, and won't qualify for an upgrade.
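Being a programmer, I could at least check the break-even math with a few lines of Python. All the numbers below are made up for illustration -- they are not actual 2011 AT&T or T-Mobile prices:

```python
# Two-year total cost of ownership, with made-up illustrative prices.
def total_cost(phone_price, monthly_plan, months=24):
    """Phone price up front, plus the plan paid every month."""
    return phone_price + monthly_plan * months

# Hypothetical: subsidized G2 on a 2-year contract, pricier data plan...
subsidized = total_cost(phone_price=100, monthly_plan=80)
# ...versus full retail price with a cheaper no-contract plan.
full_price = total_cost(phone_price=500, monthly_plan=60)

print(subsidized)  # 2020
print(full_price)  # 1940 -- full price wins over two years
```

With these particular (invented) numbers, the unsubsidized phone comes out cheaper over the life of the contract, which is exactly the kind of trade-off I was weighing.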

All of this gives me so much headache that I throw up my hands and give up.

Monday, April 11, 2011

Will supercomputing intelligences outsmart human-level intelligence? "The Singularity: Humanity's Huge Techno Challenge" panel claimed to dissect the very core of the Singularity, if and when it will occur, and what we can expect to happen. The question was debated by Doug Lenat, founder of an artificial intelligence project CYC, Michael Vassar, president of the Singularity Institute for Artificial Intelligence, and Natasha Vita-More, vice chair of Humanity +.

The technological Singularity is a hypothetical event occurring when technological progress becomes so rapid that it makes the future impossible to predict. It is commonly thought that such an event would happen if superhuman intelligence were created. For starters, Doug Lenat gave an overview of possible scenarios of how the technological Singularity would happen, or why it wouldn't. He listed these forces driving us toward the creation of superhuman intelligence: demand for competitive, cutting-edge software applications (commercial and government); demand for personal assistants, such as SIRI, but enhanced; demand for "smarter" AI in games; and mass vetting of errorful learned knowledge, such as in Wikipedia. And the forces that may preclude the Singularity? Large enterprises can stay on top in ways other than being technologically competitive; humans, too, may be satisfied with bread and circuits, immersing themselves in games that distract them from pressing realities. The Singularity also may not happen if some event or trend kills all the advanced technology: an energy crisis, a neo-Luddite backlash, or the AI's merciful suicide (say, an AI realizes it's a threat to humanity and kills itself). Then there are pick-your-favorite doomsday scenarios, such as grey goo, wherein nanobots multiplying out of control munch up all the matter on Earth.

Doug Lenat speaks about forces pushing us towards Singularity. More pictures from SXSW 2011 are in my photo gallery.

Which is more likely -- that the Singularity will happen, or that some forces will prevent it from happening? How dangerous will it be for us, humans? How compatible will it be with our continued existence?

As one would expect from the president of the Singularity Institute, Michael Vassar seems to think the Singularity is likely, and that we would get there much sooner if we planned technology more deliberately than we do. "The more you study history, the more you'll see that we don't do very much deliberation. And the little that we do really goes a very long way," he says. For millennia, technology evolved in a random, unplanned way, similar to biological evolution. About 300 years ago humans started thinking more deliberately. (I don't know where Vassar gets this number -- the Industrial Revolution started 200 rather than 300 years ago.) Automating the kind of human thought that can be performed well by machines, and combining it with the kind of thought that's not easy to automate, may lead us to a very rapid technological acceleration. But to close the gap between machine and human intelligence, we need to build a very good understanding of human intelligence. At some point in history humanity discovered the scientific method, which is a very rudimentary understanding of how reasoning works. It allowed us to build institutions that will shape the future the way no other institutions have been able to, says Vassar.

As to us being able to control whether nonhuman superintelligences will help us or cause our extinction, Vassar is not too optimistic. "Ray Kurzweil thinks we can get emerging superhuman intelligences to slow down. But we, humans, don't have a good track record of getting potentially dangerous trends to slow down."

In every panel on the Singularity, you'll get people who understand that the Singularity may happen entirely outside of human control, and then you'll get those who view the Singularity only as a tool for progress, especially social progress, and have no interest in it otherwise. This was the case, for example, at the Singularity panel at ArmadilloCon 2003, when one writer said that if the Singularity isn't going to enforce social justice, it's not going to happen. I got the impression that Natasha Vita-More is in the second camp. She spoke about how advancing technologies need to solve aging, healthcare, and social problems, especially those that still needlessly exist in the third world, as if technology will only do what we need it to do. She did not address the possibility that the Singularity might take off without our control or influence.

She started by saying: "The Singularity is presumed to be an event that happens to us rather than an opportunity to boost human cognitive abilities. The very same technology that proposes to build superintelligences could also dramatically enhance human cognition. Rather than looking at the Singularity as a fait accompli birthing of superintelligences that might foster human extinction risk, an alternative theory forms an intervention between human and technology. [...] The Singularity needs smart design to solve problems." According to her, humans would achieve that by "evolving at the speed of technology", in other words, cyborgizing themselves.

Humans may have to deliberately redesign their brains and bodies to keep up or merge with the machines, but that still does not preclude the chance that the Singularity might not come about by our design. If nonhuman superintelligences evolve, what incentive would they have to merge with humans? Why carry around flesh bodies, even ones engineered for extra strength, resilience, or longevity? I'm reminded of what Bruce Sterling said on another occasion about trying to fit new technology into a conceptual framework of old technology: it would be like putting a papier-mâché horse head on the hood of your car.

Doug Lenat disagrees that integration of our physical bodies with machines is necessary or sufficient for the Singularity to happen. He would focus not on dramatic cyborgization, but just on information technology. Having information-processing appliances that amplify our brain power would change us the same way that electrical devices amplified our muscles 100 years ago. We travelled farther than our legs could carry us, we communicated farther than we could shout -- it changed our lives in fundamental ways and never changed back. Approaching the Singularity, we'll see appliances amplifying our minds the same way. Society will be amplified as well, become smarter in general, and will be able to solve the problems that Natasha Vita-More was talking about. At the same time, he doesn't think technology is a panacea for that: "When technology automated a number of things that were done manually before, social stratification only increased."

Michael Vassar goes even further: "We have technologies to solve most social problems today. But what we don't have is the ability to engage ourselves in solving the problems we don't care about."

Somebody in the audience asked: "Do you think a consciousness that exists outside the human body (e.g. in a machine) can be spontaneously generated?" Michael Vassar replied: "I don't know what you mean by spontaneously generated, but I think not likely. Consciousness would not be generated without a great deal of design." Doug Lenat thought this question was too vague. In a limited sense of consciousness, programs are conscious: you can interrogate CYC (Lenat's AI project) programs about their goals or methods, so they do have some self-reflection built into them. But it's probably nothing like what a human observer would perceive as consciousness. To answer this question, a better definition of consciousness is needed.

Also, in the future we will each have many avatars doing many different things, says Doug Lenat. Mental aids will direct our attention to where it's most needed at the moment. In that sense, each person's consciousness will exist everywhere.

Another question from the audience. "To be truly creative, you have to unplug yourself from technology often enough. So how would uploaded brains do that? Would inability to do that kill their creativity?"

Michael Vassar. "If I was an uploaded or enhanced being, I would be able to unplug myself much better. I would not only unplug from my laptop or the internet, but even from my visual cortex."

And here is another take on Singularity, where the original popularizer of the concept, Vernor Vinge, discusses the concept with several science fiction writers.

Friday, April 01, 2011

One of my most anticipated SXSW 2011 panels was "Social Media is Science Fiction". What do science fiction stories tell us about how social networking and user-generated content will evolve? How will they affect us as a civilization? These topics were debated by Annalee Newitz and Charlie Jane Anders of the futurist magazine io9.com, Matt Thompson from NPR, science fiction writer Maureen McHugh, and artist Molly Crabapple.

Social media, especially Facebook, pushes us to maintain a single, solid avatar, says Annalee Newitz. (Interestingly, the panelists used the word "avatar" to mean not just userpics, but entire digital selves, or digital personas.) It wants us to expose all the different aspects of our lives, and to consolidate them into one. For the first time in human society we are seeing a new man, who has to be authentic, who has to be the same person in every context: as a worker, as a "john", as a father or as a child. Maybe this authenticity is good, it's pushing us not to be hypocrites, says Newitz. But also it's making us more and more of an open book, vulnerable in new ways. Molly Crabapple says: "A delusion we have is that only cool people will read our updates. At 13, I genuinely believed that my updates will be read only by sympathetic audience." The reality is, says Newitz, that you might tweet "I just got my period", and you'll get an ad from Tampax. (She says she marks all Facebook ads as offensive.)

Controlling the image we present to the world on social networks is becoming more difficult. A teenager may carefully pose in a mirror to make sure their self-portrait looks just right, but Google, already on the way to becoming an AI, may outrank it with more candid shots of him or her. It is as if, after all that posing, you walk by a shop window, and catch a glimpse of yourself, and think "OMG, do I look like that?" says Maureen McHugh.

If you don't control your own "avatar", who does? If you interact with another person, does the content of that interaction belong equally to both people to record, tweet, post, and make as public as either of them wishes? Do you reserve the right to make just your half public? But since that's usually impossible without revealing too much about the other party, how should this be resolved? Back in the day when most conversations and interactions left no record, the question didn't arise. A society where everything is recorded requires new rules and protocols. This is already being addressed by science fiction: Annalee Newitz recommends the novel "The Quantum Thief" by Hannu Rajaniemi, a far-future fantasy where everybody has an organ in their brain that serves as a privacy negotiator. Such a negotiation may determine, for example, what information you will remember after a meeting with somebody.

Matt Thompson observes that social media is increasingly about memory. He would like to see social media serve as a storage component of the society. All the tweets, pictures and videos could provide an incredibly detailed look at our society for future historians. (If that data is appropriately organized and mined, I might add.)

Has social media changed storytelling?

This was a question from the audience to Maureen McHugh, a science fiction writer who also writes for an alternate reality game company, Fourth Wall Studios. McHugh says that in the last 10 years she has discovered that the traditional story still works very well, and interactive storytelling doesn't make it better. Hamlet does not become a better play if the audience gets to select the ending. Audiences want characters to be either happy or punished, and sometimes the story isn't good if the characters are happy or punished too soon. Charlie Jane Anders adds that twice she has let the audience select a story's ending: she tweeted, "Which story ending do you like better?" and chose the ending they selected.

By the sound of it, the panelists are more inclined to be grim than optimistic about a social media-shaped future. Molly Crabapple observed that the recent trend of gamification may restructure the future workplace in such a way that it would require us to participate in game-like challenges with no pay -- the "fun" we'll be having will be its own reward. "You will go to Walmart and participate in the box-lifting challenge. You'll see who can lift and stack the most boxes and the prize will be what used to be your salary." (Thanks to Dale Roe for that quote. His own take on this panel can be found in this Austin American-Statesman blog post.)

Maureen McHugh. I keep in touch with my kid via social media every day.

Molly Crabapple. I founded a company with no money that has branches in 28 cities, because of social media.

Matt Thompson. Craigslist is a miracle. It has helped my life in giant, immeasurable ways, including that this is where I found my partner. There are opportunities for connection, and for us to step outside our worlds, that we are only just beginning to appreciate.

Annalee Newitz. I work at home alone with my cats, so for me Twitter is like being at work, it's my social contact. I sit there and have all those conversations via Twitter.

About Me

A geekess by profession and personality. Torn in many different directions: programming, writing fiction and nonfiction, publishing, blogging. I blog about science fiction (not the Star Trek kind, but the "thought experiment" kind), science, technology (mostly Austin, TX tech events), and freethought, among other things. My "official" blog, SFragments, contains in-depth articles on various topics discussed at science fiction conventions and author events; this one is more personal and covers a wider range of topics, including technology events in Austin, TX, startups, and applications.