June 29, 2010

I've often thought that cryonics, the practice of storing tissue (namely the brain) in a vat of liquid nitrogen, may eventually come to be seen as a rather primitive and naive technique for preservation. While it may be the only current option for those hoping to capture and restore their brain states for future reanimation, cryonics as a concept may not stand the test of time. More sophisticated methods have already been proposed, including warm biostasis and plastination.

While warm biostasis remains a largely theoretical endeavor, brain plastination was recently given a considerable boost through the founding of the Brain Preservation Foundation. Launched by Acceleration Studies Foundation founder John Smart and Harvard neuroscientist Ken Hayworth, the BPF is seeking to facilitate the development of any technology that will effectively preserve the brain for eventual reanimation. While the foundation members' pet interest is in plastination, they are not married to any particular technique. As far as they're concerned, the successful development of any kind of brain preservation technology means that everyone wins.

To this end, the Foundation has launched the Brain Preservation Technology Prize – a prize for demonstrating ultrastructure preservation across an entire large mammalian brain and verified by a comprehensive electron microscopic survey procedure. Think of it as an X-Prize for brain preservation technology. The Foundation wants to encourage researchers to develop techniques “capable of inexpensively and completely preserving an entire human brain for long-term storage with such fidelity that the structure of every neuronal process and every synaptic connection remains intact and traceable using today’s electron microscopic imaging techniques.”

The current purse is $100,000, but they expect this amount to increase as donors chip in. And in anticipation of success, the BPF has created a Brain Preservation Bill of Rights.

As noted, the BPF has a special interest in brain plastination, mostly on account of Smart and Hayworth's extensive work in this field. If you've ever seen a Body Worlds exhibit, then you know about plastination. It is thought that brain-state may be preserved through the chemical conversion of brain matter into a non-degradable substrate, which is why the proposed technique is also referred to as chemical brain preservation. For example, it might be possible to flood a brain shortly after death with glutaraldehyde to fix proteins, followed by osmium tetroxide to stabilize lipids and other compounds. Essentially, this process could turn a deceased brain into a chunk of plastic that will last indefinitely.

Smart envisions the day when this technology is refined and streamlined to the point where preservation may cost as little as $2,000. Not a bad price for a radically extended life.

I recently had the opportunity to speak with Smart and Hayworth about their project at the Humanity+ Summit that was held in early June. A few of us conference attendees were given an informal guided tour of Hayworth's lab at Harvard. It is here that Hayworth uses electron microscopy to delineate every synaptic connection from plastinated mouse brains, a process that preserves both structure and molecular-level information. Essentially, while they're working on technologies to preserve, image, and analyze mouse brains--an essential step toward a whole mouse brain connectome--Hayworth and his team are developing the theory and technologies that will be required to preserve human brains as well.

The tour of Hayworth's lab was jaw-dropping on many levels. Not only did I get a chance to see slides of brains at the nanometer scale, I got a chance to see real researchers doing real work in a real lab. It's transhumanism under construction; this wasn't airy-fairy armchair futuristic fantasy - this research is actually happening.

Hayworth noted that, "Over the next decade or two these or other techniques will be developed and will allow the creation of a synapse-level atlas of the entire human brain - something that has been dubbed the 'human connectome'." As for mind uploading from a plastic embedded brain, Hayworth believes that's about 50 years off.

Make a donation to the Brain Preservation Foundation today. Your life may depend on it.

Global Politician columnist Sam Vaknin argues in a recent article that science fiction is guilty of ten specific mistakes when postulating the characteristics of advanced extraterrestrial life. Specifically, he contends that sci-fi writers consistently buy into fallacies about:

Life in the universe

The concept of structure

Communication and interaction

Location

Separateness

Transportation

Will and intention

Intelligence

Artificial vs. natural

Leadership

While the article certainly raises some food for thought, Vaknin's call for writers to think more 'outside of the box' is a bit of a stretch, if not condescending. Science fiction writers, for the most part, take great pains to weave a coherent narrative around novel imaginings of what ETIs might look like. Moreover, Vaknin is himself guilty of considerable hand-waving, arguing that ETIs may be existentially and qualitatively of a different sort than what we might expect, while providing no substantive or compelling evidence to support the claim.

Sure, I agree that ETIs may be dramatically different than what we can imagine and that they may exist outside of expected paradigms, but until our exoscience matures we should probably err on the side of the self-sampling assumption and figure that the ignition and evolution of life tends to follow a similar path to the one taken on Earth. Now, I'm not suggesting that we refrain from hypothesizing about radically different existence-states; I'm just saying that these sorts of extraordinary claims (like alternative intelligences spawning different quantum realities) require the requisite evidence. It's one thing to fantasize about some kind of energy-based hive-mind living in the cores of asteroids; it's quite another to show that such a thing could come about through the laws of physics [my example, not Vaknin's].

In the article, Vaknin also posits six basic explanations to the Fermi Paradox (and the apparent failure of SETI) that are not mutually exclusive:

That Aliens do not exist

That the technology they use is far too advanced to be detected by us and, the flip side of this hypothesis, that the technology we use is insufficiently advanced to be noticed by them

That we are looking for extraterrestrials at the wrong places

That the Aliens are life forms so different to us that we fail to recognize them as sentient beings or to communicate with them

That Aliens are trying to communicate with us but constantly fail due to a variety of hindrances, some structural and some circumstantial

That they are avoiding us because of our misconduct (example: the alleged destruction of the environment) or because of our traits (for instance, our innate belligerence) or because of ethical considerations

Very quickly: point one is possible but grossly improbable; points two through five are essentially the same argument—that we don't yet know where, how and what to look for; and point six violates the non-exclusivity principle (it explains some but not all ETI behavior). It's odd that Vaknin selected these particular six arguments. There are many, many potential resolutions to the FP, and these are not notably stronger than the others (though point #1 has a lot of traction among the Rare Earthers). And where is the Great Filter argument, which is possibly the strongest of them all?

Nice try, Vaknin, but the Great Silence problem is more complex than what you've laid out.

June 25, 2010

For years, the Women's Bioethics Project (WBP) provided a crucial channel for female bioethicists to voice their concerns and support for key biotechnologies at the dawn of the transhuman era. Virtually no topic was off limits, whether it be voluntary euthanasia or the potential for exosomatic wombs. The WBP perspective was a breath of fresh air in a sea littered with bioconservatives, anti-technological feminists and religious conservatives. Not to mention overzealous male techno-optimists.

But it wasn't always this way. Back in 2003 I spoke at Yale about how feminists seemed to be forsaking the future, unwilling to engage in bioethical and biotechnological discourse. It seemed absurd to me at the time that the only people talking about such topics as human trait selection, reproductive technologies, genomics, and stem cell research were geeky white males (myself included). All feminists, it seemed to me at the time, were anti-technological ideologues who were unwilling to discuss the possibilities and what it might mean for women. Donna Haraway's legacy, I thought, had been all but abandoned.

It was with great relief, then, that the Women's Bioethics Project was launched a year later, featuring such writers as Linda MacDonald Glenn, Kristi Scott, Kelly Hills and many others. Indeed, as the blog header proclaimed, "This is not your typical blog. We have recruited scholars and public policy analysts from around the world to provide daily news and commentary on the implications of bioethical issues for women." And as Hinsch noted in her farewell post, "we developed innovative programs, policy recommendations and research on ethical issues pertaining to women’s health, reproductive technologies, and neuroethics. We made a difference: our work brought these important issues to new audiences and encouraged women to participate in policy development around bioethics questions."

And that they did. Their work will be missed, but thankfully many of the WBP alumni will continue to contribute to the IEET.

In Choosing Tomorrow's Children, Stephen Wilkinson looks at the ethics of selection, concentrating mainly on 'same number' decisions that we may make. A 'same number' decision is one in which we have chosen to bring a child to birth, but have not yet decided which child. (A 'different number' decision, by contrast, would be one in which we have to choose whether to reproduce at all.) Put another way, he is concerned with choosing between different possible future people (p5). Within this range, though, there are a number of different situations that may give us cause to want to choose: we might be making decisions about choosing an embryo to act as a 'saviour sibling', choosing an embryo to avoid a certain disability, choosing in favour of a (prima facie) disability - as in the case of Candace McCullough and Sharon Duchesneau, who sought specifically to have a deaf child - or choosing one gender over another. Wilkinson spends time considering all these variations on the 'choosing children' theme, and is guided by a presumption of permissibility - a presumption that everything is permitted unless and until it is forbidden, and that the onus is on the person doing the forbidding to make the case for impermissibility.

As far as Wilkinson is concerned, many (if not most) of the arguments that one might mount to establish the impermissibility of choosing children fail. This principle applies even in relation to controversial decisions such as McCullough and Duchesneau's. For in their case, the strongest argument that they would have to face would in all likelihood have to do with the welfare of the child created thereby: that deafness is welfare-reducing, and that it is wrong deliberately to create a child with lower welfare than it might otherwise have enjoyed. Yet, says Wilkinson, even this claim is weak. Partly this has to do with a scepticism about whether choosing for a disability is necessarily the same as choosing for a lower quality of life; partly it has to do with a claim that, even if disabled, people overwhelmingly have a life worth living, and that since this is the only life they could possibly have lived, there is no sense in which they could be said to suffer from a wrongful life; partly it is because the impersonal 'Same Number Quality Claim' - the idea that we ought to select for a higher quality of life whenever possible - does not reliably tell us that all examples of selecting for disability are wrong, and so, even at its strongest, will not tell us that this particular instance of choosing disability is de facto wrong.

Richard A. Clarke, former head of counterterrorism at the National Security Council, has followed Mr. Kurzweil’s work and written a science-fiction thriller, “Breakpoint,” in which a group of terrorists try to halt the advance of technology. He sees major conflicts coming as the government and citizens try to wrap their heads around technology that’s just beginning to appear.

“There are enormous social and political issues that will arise,” Mr. Clarke says. “There are vast groups of people in society who believe the earth is 5,000 years old. If they want to slow down progress and prevent the world from changing around them and they engaged in political action or violence, then there will have to be some sort of decision point.”

Mr. Clarke says the government has a contingency plan for just about everything — including an attack by Canada — but has yet to think through the implications of techno-philosophies like the Singularity. (If it’s any consolation, Mr. Long of the Defense Department asked a flood of questions while attending Singularity University.)

Mr. Kurzweil himself acknowledges the possibility of grim outcomes from rapidly advancing technology but prefers to think positively. “Technological evolution is a continuation of biological evolution,” he says. “That is very much a natural process.”

Disturbing fact revealed in the article: Google and Microsoft employees trailed only members of the military as the largest individual contributors to Ron Paul’s 2008 presidential campaign.

For a curious and infuriating response to the NYT article, be sure to check out Pete Shanks's "A Singular Kind of Eugenics," but be warned: the bullshit factor is off the charts (e.g. Shanks is terribly confused about the history of transhumanism, particularly the role and evolution of the Extropy Institute, the World Transhumanist Association, Humanity+ and the Institute for Ethics and Emerging Technologies).

The Singularity Summit for 2010 has been announced and will be held on August 14-15 at the San Francisco Hyatt Regency. Be sure to register soon.

This year's Summit, which is hosted by the Singularity Institute, will focus on neuroscience, bioscience, cognitive enhancement, and other explorations of what Vernor Vinge called 'intelligence amplification' -- the other route to the technological Singularity.

Of particular interest to me will be the talk given by Irene Pepperberg, author of "Alex & Me," who has pushed the frontier of animal intelligence with her research on African Gray Parrots. She will be exploring the ethical and practical implications of non-human intelligence enhancement and of the creation of new intelligent life less powerful than ourselves.

A sampling of the speakers list includes:

Ray Kurzweil, inventor, futurist, author of The Singularity is Near

James Randi, skeptic-magician, founder of the James Randi Educational Foundation

Dr. Anita Goel, a leader in the field of bionanotechnology, Founder & CEO, Nanobiosym, Inc.

Will it one day become possible to boost human intelligence using brain implants, or to create an artificial intelligence smarter than Einstein? In a 1993 paper presented to NASA, science fiction author and mathematician Vernor Vinge called such a hypothetical event a "Singularity", saying "From the human point of view this change will be a throwing away of all the previous rules, perhaps in the blink of an eye". Vinge pointed out that intelligence enhancement could lead to "closing the loop" between intelligence and technology, creating a positive feedback effect.
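Vinge's "closing the loop" is, at bottom, a positive feedback process, and its flavor can be seen in a toy simulation. This is my own illustration under an assumed constant gain, not Vinge's math or anyone's actual forecast:

```python
def closed_loop(intelligence=1.0, gain=0.1, generations=10):
    """Toy positive-feedback model: each generation of smarter minds
    builds tools that boost the next generation's intelligence."""
    history = [intelligence]
    for _ in range(generations):
        # better minds -> better tools -> better minds
        intelligence += gain * intelligence
        history.append(intelligence)
    return history

print(closed_loop()[-1])  # compound growth: about 2.59x after ten generations
```

The point is only the shape of the curve: any loop whose output feeds back into the capability producing it grows geometrically, which is why Vinge expected the change to arrive "in the blink of an eye" rather than as steady linear improvement.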

This August 14-15, hundreds of AI researchers, robotics experts, philosophers, entrepreneurs, scientists, and interested laypeople will converge in San Francisco to address the Singularity and related issues at the only conference on the topic, the Singularity Summit. Experts in fields including animal intelligence, artificial intelligence, brain-computer interfacing, tissue regeneration, medical ethics, computational neurobiology, augmented reality, and more will share their latest research and explore its implications for the future of humanity.

Scientists believe it could damage everything from emergency services’ systems, hospital equipment, banking systems and air traffic control devices, through to “everyday” items such as home computers, iPods and Sat Navs.

Due to humans’ heavy reliance on electronic devices, which are sensitive to magnetic energy, the storm could leave a multi-billion pound damage bill and “potentially devastating” problems for governments.

“We know it is coming but we don’t know how bad it is going to be,” Dr Richard Fisher, the director of Nasa's Heliophysics division, said in an interview with The Daily Telegraph.

“It will disrupt communication devices such as satellites and car navigations, air travel, the banking system, our computers, everything that is electronic. It will cause major problems for the world.

“Large areas will be without electricity power and to repair that damage will be hard as that takes time.”

Dr Fisher added: “Systems will just not work. The flares change the magnetic field on the earth that is rapid and like a lightning bolt. That is the solar affect.”

In 2003, Sinclair made headlines around the world when he announced that the red-wine component resveratrol, which had previously been linked to a reduction in heart disease, extended life span in yeast. He argued that the compound activated one of the sirtuins and proposed that it mimicked the effects of caloric restriction. Sinclair and Westphal launched Sirtris in 2004 with the aim of developing molecules that could stimulate the enzyme much more potently. The company is developing treatments not for aging itself--which the U.S. Food and Drug Administration doesn't consider an illness--but for diseases of aging, such as diabetes, Alzheimer's, and cancer.

As Stipp recounts, hopes for antiaging drugs captured media attention and investors' imaginations. But a different conversation has played out in the academic community. Some scientists doubted whether resveratrol truly targeted the sirtuins. Researchers at drug maker Pfizer also published a study in January questioning whether one of Sirtris's newer compounds targets the enzyme. The study failed to confirm the health benefits seen in earlier trials. To make matters worse, safety concerns have arisen over one of Sirtris's resveratrol compounds. In May, Glaxo announced that it would not expand a clinical trial for multiple-myeloma patients until it better understood why some participants developed a dangerous kidney ailment.

The field of antiaging research is littered with failures, and the controversy over Sirtris's compounds highlights just how difficult it has been to transform exciting scientific discoveries about the aging process into useful drugs. As Stipp illustrates, many candidates with promising antiaging benefits later failed to work in mammals or showed conflicting results.

The article, "Dignity and Agential Realism: Human, Posthuman, and Nonhuman," was in response to Fabrice Jotterand's critique of transhumanism, "Human Dignity and Transhumanism: Do Anthro-technological Devices Have Moral Status?"

I can't republish our entire article at this time, but here's a taste:

The notion that beings exist as individuals with inherent attributes (such as dignity) anterior to their representation, is a metaphysical presupposition that underlies the belief in political, linguistic, and epistemological forms of representationalism (Barad, 2003). Within the framework of representationalism, dignity is most certainly tied into capacity; it is fair, within that framework, to suggest that the diminishment or deliberate withholding of certain attributes results in the lessening of one's dignity (That being said, one should not confuse dignity with the ways in which all human persons deserve equal status in the eyes of the law). Consequently, the inverse also holds true, whether it be the alleviation of a debilitating syndrome or the augmentation of a physical or cognitive characteristic. As far as the agent in question is concerned, human or otherwise, these interventions are dignifying. Representationalism, on the other hand, separates the world into the ontologically disjointed domains of words and things. (Barad, 2003)

A performative understanding, in contrast, contests this metaphysical assumption that dignity is an inherent attribute, as if dignity existed in a vacuum, independently of an individual’s actions or interactions with other beings. A performative understanding shifts the focus from description and questions of correspondence to matters of practice, doings and/or actions; the active participation, phenomena and “intra-actions” (Barad 2003, 2007) are what help create agreed upon meaning. The phrase made popular by M. Scott Peck, “Love is as love does” is an illustration of this shift in understanding; so is Forrest Gump's “Stupid is as stupid does.”

So a performative understanding of dignity includes recognition that the dignity of a person is contingent on the ways in which they are treated by others (including institutions) and the ways in which they are capable of interacting with their external environment. What should not be tied into notions of dignity is the value of persons or the questioning of a person's degree of equality under the law. Nor should dignity be tied into the Kassian notion of embodied human life—an inherently speciesist notion that carries with it unjustifiable conditions for exclusionism. Dignity is not status; rather, it is a measurement (or assessment) of the quality in which persons are treated, the depth of their interactions, and the degree to which they are capable of engaging in life. Consequently, a performative understanding of dignity recognizes it as more an issue of treatment and social justice than abstract and confusing notions about equality, value and status.

This one’s kinda hard to swallow so take a deep breath, open your minds, and pretend it’s 2100. I CONTACT is essentially a mouse fitted to your eyeball. The lens is inserted like any other normal contact lens except it’s laced with sensors to track eye movement, relaying that position to a receiver connected to your computer. Theoretically that should give you full control over a mouse cursor. I’d imagine holding a blink correlates to mouse clicks.

The idea was originally created for people with disabilities but anyone could use it. Those of us too lazy to use a mouse now have a free hand to do whatever it is people do when they sit at the computer for endless hours. I love the idea but there is a caveat. How is the lens powered? Perhaps in the future, electrical power can be harnessed from the human body, just not in a creepy, Matrix-like way.
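For what it's worth, the cursor-mapping side of such a lens is the easy part; the hard part is the sensor and power hardware. Here's a minimal sketch of the software step, with hypothetical function names and thresholds of my own invention, not anything from the I CONTACT designers:

```python
def gaze_to_cursor(gaze_x, gaze_y, screen_w=1920, screen_h=1080):
    """Map a normalized gaze position (0.0-1.0 on each axis, as might be
    reported by the lens's receiver) to screen pixel coordinates."""
    x = min(max(gaze_x, 0.0), 1.0)  # clamp out-of-range readings
    y = min(max(gaze_y, 0.0), 1.0)
    return (round(x * (screen_w - 1)), round(y * (screen_h - 1)))

def is_click(blink_duration_s, threshold_s=0.4):
    """Treat a deliberately held blink as a click; brief involuntary
    blinks fall under the threshold and are ignored."""
    return blink_duration_s >= threshold_s

print(gaze_to_cursor(1.0, 1.0))  # bottom-right corner: (1919, 1079)
```

The blink threshold is the interesting design choice: ordinary blinks last a fraction of a second, so a held blink a few times longer than that can plausibly be distinguished as an intentional click, which matches my guess above.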

Humanity’s foibles will be laid bare. The species’s history, from its tentative beginning in north-east Africa to its current imperial dominion, has already been revealed, just through being able to read the genome. It is now possible, too, to compare Homo sapiens with his closest relative—not the living chimpanzee, with whom he parted company perhaps 5m years ago, but the extinct Neanderthal, a true human. That will do what philosophers have dreamed of, but none has yet accomplished: show just what it is that makes Homo sapiens unique. The genome will answer, too, the age-old question of original sin. By showing what is nature, it will reveal what is nurture—and thus just how flexible and perfectible the human animal really is....Genomics may reveal that humans really are brothers and sisters under the skin. The species is young, so there has been little time for differences to evolve. Politically, that would be good news. It may turn out, however, that some differences both between and within groups are quite marked. If those differences are in sensitive traits like personality or intelligence, real trouble could ensue.

People must be prepared for this possibility, and ready to resist the excesses of racialism, nationalism and eugenics that some are bound to propose in response. That will not be easy. The liberal answer is to respect people as individuals, regardless of the genetic hand that they have been dealt. Genetic knowledge, however awkward, does not change that.

June 22, 2010

I will be on Skeptically Speaking this coming Friday June 25 at 8:00PM EST. I will be having a conversation/debate about transhumanism with World of Weird Things blogger Greg Fish. More specifically, we will "explore the predictions and the problems in the quest to “enhance” human beings."

While the show will be broadcast live over the air on CJSR 88.5 in Edmonton, it will also be made available live over the internet (and eventually distributed to over 22 radio stations across North America). It's also a call-in show, so feel free to call me during the broadcast.

Patenting life was taken a step further in 1984, when Harvard University successfully applied for a patent on its "oncomouse", a laboratory mouse specifically designed to get cancer easily, so that it would be more useful as a research tool. There are good grounds for objecting to turning a sentient being into a patented laboratory tool, but it is not so easy to see why patent law should not cover newly designed bacteria or algae, which can feel nothing and may be as useful as any other invention.

Indeed, Synthia's very existence challenges the distinction between living and artificial that underlies much of the opposition to "patenting life" – though pointing this out is not to approve the granting of sweeping patents that prevent other scientists from making their own discoveries in this important new field.

As for the likely usefulness of synthetic bacteria, the fact that Synthia's birth had to compete for headlines with news of the world's worst-ever oil spill made the point more effectively than any public-relations effort could have done. One day, we may be able to design bacteria that can quickly, safely, and effectively clean up oil spills. And, according to Venter, if his team's new technology had been available last year, it would have been possible to produce a vaccine to protect ourselves against H1N1 influenza in 24 hours, rather than several weeks.

Only in the past decade have we started to realize that transhumanism won’t realize its dreams through mechanization and computerization. Though seminal authors on transhumanism, like Kurzweil, Moravec, Drexler, and More, focus on nanotechnology and cybernetics, those technologies haven’t seen real progress since the '70s.

But genetics and biotech have. Starting in the 1950s with the Pill, vaccines, and antibiotics, our knowledge of medicine and biology radically improved throughout the second half of the twentieth century. With assisted reproduction technologies like IVF, not to mention genomic sequencing, stem cell research, organ transplantation, and neural mapping, advances in biology and medicine are what are driving the transhumanist revolution. When someone like Mark Gubrud starts arguing transhumanism won’t work because we can’t upload our minds into robot bodies, one has to gawk for a moment in awe at the irrelevance of the argument. It’s like arguing we can’t ever cure cancer because cold fusion is impossible.

Transhumanism is the idea of guiding and improving human evolution with intention through the use of technologies and culture. If those technologies are not robotic and cybernetic but, instead, genetic and organic, then so be it. And that seems to be the way things are going.

Traditional binary views of gender grow less and less important as scientists show that gender identity is diverse in nature and shaped by many biological and social conditions. If one were to look at the pure science of gender identity, it not only appears that a postgender society is possible, but that we are already living in one.

IEET Chair Nick Bostrom discusses the Great Silence with Robert Lawrence Kuhn on Closer to the Truth. Nick and I are totally on the same wavelength here, including our agreement over the suggestion that the discovery of life in the solar system would be bad news.

June 21, 2010

Advice for college students and graduating high-schoolers. Reflecting on his son's graduation from high school, Science Fiction author David Brin offers inspiration and advice for students going on to college. Broaden your perspectives and take full advantage of the wealth of educational experiences awaiting you during the next four years. The key is curiosity. Among several tricks offered: explore what is happening in those buildings on campus. Once a month, pick a building and randomly knock on doors! What’s the worst that can happen? What’s the best?

This one has gone viral, with 5,000 hits in the first day! (Hint: you folks could also spread the word.) Great (and highly unusual) advice for that bright young college-bound grad.

= OTHER NEWS... then controversy... and science! =

Back in 1985 I was the very first author in Bantam's (Random House) science fiction line SPECTRA. Now this famed, accomplished publishing line is celebrating its 25th anniversary. Time flies and the future rushes upon us. Congratulations Spectra!

I’ve been on more than thirty television shows. I’ve had one novel filmed and others scripted. But now comes my first appearance on the big screen! “The People vs. George Lucas” premieres June 23 at the Los Angeles Film Festival. I was interviewed for this provocative documentary -- along with many passionate fans and foes of the popular Lucasian universe. I’ve already stated my opinion, as editor and ‘prosecutor’ in the book Star Wars on Trial, which offers every pro/con perspective in more detail - a real treat for fans of intellectual dissections of pop-culture! But for a lighter-fun scan, the movie is coming soon to a theater not too far away…

And yet more podcasts! Especially for TedX Munich, I performed a 10 minute video talk entitled “Ambitious Problem-solving for the Future.” It's too easy to lapse into negativity/pessimism about the problems we face: war, political instability, economic trouble, global warming. Indeed, vast inequalities of wealth exist across the globe. To keep things in perspective, we should recall that things were nearly always worse in the past. We must develop innovative problem-solving skills to face the complex world of the future – and to raise standards of living across the world. For the first time, the entire world community is able to communicate -- across borders and nationalities -- to share strategies and seek solutions. My favorite aphorism: Criticism is the only known antidote to error. Identifying errors is the first step toward seeking solutions. But we must keep in mind the goal – to improve our civilization. Technology must be part of the solution.

And now exciting news that I predicted... Andrew Wade, an avid player in the two-dimensional, mathematical universe known as the Game of Life, posted his self-replicating mathematical organism on a Life community website on 18 May. It sparked a wave of excitement. And might I note that I foresaw this would happen, in my novel GLORY SEASON? Someone log in and congratulate him, on my behalf?
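For anyone who hasn't played in Life's two-dimensional universe: the whole world is a grid of cells evolving under just two rules (a dead cell with exactly three live neighbours is born; a live cell with two or three live neighbours survives). A minimal sketch of one generation, my own illustration rather than anything to do with Wade's replicator pattern:

```python
from collections import Counter

def step(live):
    """Advance Conway's Game of Life one generation.
    `live` is a set of (x, y) coordinates of live cells."""
    # Count how many live neighbours every candidate cell has.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

blinker = {(0, 1), (1, 1), (2, 1)}  # the classic period-2 oscillator
```

From these two rules alone emerge oscillators, gliders, and now, with Wade's organism, a pattern that constructs a complete copy of itself.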

On the other hand, I’ve long been a champion of openness, e.g. in my nonfiction book, The Transparent Society: Will Technology Make Us Choose Between Privacy and Freedom? In fact, I’ve long held that some millionaire could do more to save freedom and civilization than any other person on the planet, by funding an entirely new approach to encouraging whistle-blowers. (It really is a cool idea!) Indeed, I was generally approving of the endeavor known as WikiLeaks... an attempt to create a clearinghouse online for people to expose what they perceive as wrongdoing.

Alas, the core person at WikiLeaks appears to be on the run from authorities who want to nail a member of the military who disseminated a large number of classified documents from his post in Iraq. In fact, the matter is more complicated than it seems on the surface. (I will opine further on this - informally - under "comments" below.)

= And Now Lighthearted (intellectually satisfying) Fun! =

Go read some of the terrific “fanfic” or fan-generated fiction out there. Here’s a great example: futurist/scholar Eliezer Yudkowsky’s ongoing series/novel that is both a tribute to - and deconstruction of - J.K. Rowling’s fantasy universe. HARRY POTTER AND THE METHODS OF RATIONALITY poses an alternate world in which Harry is a genius, not only at magic but also the muggle wizardries of math and science. Oh, and his step-parents, instead of being cartoony/silly villains, were wise, decent and smart. (In other words, Dumbledore did not commit a horrific crime, but put him with the best muggles he could find, duh?) The result is a fiercely bright, logical and infuriatingly immature 11-year-old prodigy who is loyal to science and progress and the Enlightenment, unleashed on poor Hogwarts School, vowing to up-end the corrupt, horrific and insular society that is Magical Britain.

It’s a terrific series, subtle and dramatic and stimulating. I liked especially hearing the vocal rhythms of Maggie Smith in the dialogue of Professor McGonagall. And I (naturally) loved the dissing of Yoda! Yudkowsky gets it, and lots else. Smart guy, good writer. Poses hugely terrific questions that I, too, had thought of... and a number that I hadn't. Enjoyed all the references to the Enlightenment.

I wish all Potter fans would go here, and try on a bigger, bolder and more challenging tale.

When the Turing Test is not enough: Towards a functionalist determination of personhood and the advent of an authentic machine ethics

Empirical research that works to identify those characteristics requisite for the identification of nonhuman persons is proving increasingly insufficient, particularly as neuroscientists further refine functionalist models of cognition. To say that an agent "appears" to have awareness or intelligence is inadequate. Rather, what is required is the discovery and understanding of those processes in the brain that are responsible for capacities such as self-awareness, empathy and emotion. Subsequently, the shift to a neurobiological basis for personhood will have implications for those hoping to develop self-aware artificial intelligence and brain emulations. The Turing Test alone cannot identify machine consciousness; instead, computer scientists will need to work off the functionalist model and be mindful of those processes that produce awareness. Because the potential to do harm is significant, an effective and accountable machine ethics needs to be considered. Ultimately, it is our responsibility as citizen-scientists to develop a rigorous understanding of personhood so that we can identify and work with machine minds in the most compassionate and considerate manner possible.

I'm back home after several intense and wonderful days at Harvard University attending the Humanity+ Summit. Spare time is not an ally of mine right now, but I hope to blog about the conference over the course of the next few days and weeks. So much to talk about -- from my one-on-one conversation with Stephen Wolfram through to visiting the Center for Brain Science Neuroimaging.

The chief minister's [Mohamad Ali Rustam of Malacca] comment is yet another illustration of the generally regressive influence that religion has on ethical issues – whether they are concerned with the status of women, with sexuality, with end-of-life decisions in medicine, with the environment, or with animals. Although religions do change, they change slowly, and tend to preserve attitudes that have become obsolete and often are positively harmful.

"Go forth and multiply" was a reasonable idea when the world had a few million humans in it. Now, unrestricted multiplication of our species has become a grave risk to the environment of our planet, and a significant cause of infant mortality and poverty. Yet some religious leaders continue to condemn not only abortion, but also contraception, and their condemnation of homosexuality also has the same roots in the non-reproductive nature of same-sex relationships.

In the same way, there has been great progress, worldwide, in attitudes to animals over the past century, but some religious believers, such as Mohamad Ali Rustam, remain stuck with attitudes that were formed many centuries ago.

Independently of the problems of reactionary religious belief, the trend to establish animal testing facilities in countries with weak or no regulations is an extremely worrying one. As regulations improve in Europe, North America, Australia and other countries, it seems that unscrupulous entrepreneurs are engaged in a race to the bottom.

If we are concerned about the exploitation of human workers in countries with low standards of worker protection, we should also be concerned about the treatment of even more defenceless non-human animals. At present, the only hope of reversing this trend seems to be pressure on companies not to test their products in countries without good animal welfare regulations, and pressure on research institutions not to have links with such countries. But to unravel the connections and make them clear to consumers is, unfortunately, going to be a difficult task.

Specifically, Wolfram|Alpha recently added data on health indicators for more than 200 countries and territories. They now have World Health Organization data on health care workers, immunizations, water and sanitation, preventive care, tobacco use, weight, and more.

Data is also now available on specific types of health care personnel, such as physicians, nurses, and dentists, and Wolfram|Alpha can also compute per capita figures for each type of health professional. Other interesting indicators include figures on hospital beds, drinking water and sanitation, tobacco use, weight and obesity, and reproduction and contraception.

June 6, 2010

Australian professor Roger Clarke says that cyborg rights need to be debated now. Cyborgs are alive and well today and asserting their rights, presenting society with a challenge that needs to be met head-on:

Dr Clarke says as cyborgisation is increasingly used in the medical arena, people may expect they have the right to have technology that keeps them alive.

"They may also want the right to have the technology removed when they want to die", he said.

In summary, says Dr Clarke, cyborgisation of humans is leading to a plethora of questions about human rights.

"People who are using prostheses to recover lost capabilities will seek to protect their existing rights. People who have lost capabilities but have not yet got the relevant prostheses will seek the right to have them," Dr Clark said.

"Enhanced humans will seek additional rights to go with the additional capabilities that they have."

Dr Clarke says engineers and others who develop these new technologies have an obligation to brief political, social and economic institutions on their implications.

"They have to date signally failed to do so, and urgent action is needed," Dr Clarke said.

"The need for policymakers to wake up to themselves and get debating things is becoming more acute."

Enthusiasts are building machines that can make just about anything – including their own robotic offspring. New Scientist explains:

Still, ingenious as these machines are, they merely churn out piles of parts. What about assembly? A heap of plastic and metal is not a machine, just as you don't have much in common with a pile of flesh and bones.

Greg Chirikjian, a roboticist at Johns Hopkins University in Baltimore, Maryland, agrees. "When a prototype only makes parts, the machine that made those parts wasn't reproduced," he says. A true self-replicator must handle both fabrication and assembly. Chirikjian and his colleague Matt Moses are aiming to achieve this with a kind of Lego set that doesn't need anyone to play with it.

The pair have already demonstrated key parts of such a system, using around 100 plastic blocks. Although it cannot yet fabricate these blocks itself, the machine is able to move in 3D to pick up and bind them into larger structures. Moses is currently working on having it make a complete replica of its own structure using Lego-like bricks, though the machine still relies on conventional motors - which have to be installed by hand - to drive its activity.

What has confounded fake-meat producers for years is the texture problem. Before an animal is killed, its flesh essentially marinates, for all the years that the animal lives, in the rich biological stew that we call blood: a fecund bath of oxygen, hormones, sugars and plasma. Vegan foods like tofu, tempeh (fermented soy) and seitan (wheat gluten) don't have the benefit of sloshing around in something so complex as blood before they go onto your plate. So how do you create fleshy, muscley texture without blood?

Put aside what we do to other species — that’s a different issue. Let’s assume that the choice is between a world like ours and one with no sentient beings in it at all. And assume, too — here we have to get fictitious, as philosophers often do — that if we choose to bring about the world with no sentient beings at all, everyone will agree to do that. No one’s rights will be violated — at least, not the rights of any existing people. Can non-existent people have a right to come into existence?

I do think it would be wrong to choose the non-sentient universe. In my judgment, for most people, life is worth living. Even if that is not yet the case, I am enough of an optimist to believe that, should humans survive for another century or two, we will learn from our past mistakes and bring about a world in which there is far less suffering than there is now. But justifying that choice forces us to reconsider the deep issues with which I began. Is life worth living? Are the interests of a future child a reason for bringing that child into existence? And is the continuance of our species justifiable in the face of our knowledge that it will certainly bring suffering to innocent future human beings?

This documentary offers some interesting insights into how difficult it is to both develop and predict the next iteration of the Web. I can't help but feel, however, that human cognition is missing from the discussion; in my mind, the next iteration of the web has to be further conceptualized as a part of the exosomatic brain. Anything we can do to better streamline the process of accessing and processing information will be a step in this direction. Put another way, how can we blur the divide and reduce the friction that currently separates the human mind from the internet?

Why good memory may be bad for you: The counterintuitive finding that too good a memory makes foragers inefficient reveals a glimpse of the forces that govern the evolution of intelligence.

From the article:

It's easy to imagine that a good memory would confer significant benefits to a foraging animal.

But it's not quite so straightforward, say Denis Boyer at Universite Paul Sabatier in France and Peter Walsh at the Universidad Nacional Autonoma de Mexico in Mexico.

These guys have created one of the first computer models to take into account a creature's ability to remember the locations of past foraging successes and revisit them.

Their model shows that in a changing environment, revisiting old haunts on a regular basis is not the best strategy for a forager.

It turns out instead that a better strategy is to inject an element of randomness into a regular foraging pattern. This improves foraging efficiency by a factor of up to 7, say Boyer and Walsh.

Clearly, creatures of habit are not as successful as their opportunistic cousins.
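The trade-off the researchers describe can be illustrated with a toy simulation. This is my own sketch with made-up parameters, not Boyer and Walsh's actual model: food sites slowly regrow after being stripped, so a forager that always returns to its best remembered site keeps harvesting a depleted patch, while one that sometimes wanders finds fresh ones.

```python
import random

def forage(steps, explore_prob, n_sites=50, regrow=0.05, seed=0):
    """Toy forager on a set of food sites.

    Each site holds up to 1 unit of food and regrows `regrow` per step.
    With probability `explore_prob` the forager visits a random site;
    otherwise it returns to the site of its biggest past haul.
    Returns the total food collected."""
    rng = random.Random(seed)
    food = [1.0] * n_sites
    best_site, best_haul = None, -1.0
    total = 0.0
    for _ in range(steps):
        if best_site is None or rng.random() < explore_prob:
            site = rng.randrange(n_sites)  # explore at random
        else:
            site = best_site               # rely on memory
        haul = food[site]
        food[site] = 0.0                   # strip the site bare
        total += haul
        if haul > best_haul:
            best_site, best_haul = site, haul
        # every site regrows a little each step, capped at 1 unit
        food = [min(1.0, f + regrow) for f in food]
    return total

pure_memory = forage(200, explore_prob=0.0)  # always revisit the best site
mixed = forage(200, explore_prob=0.3)        # occasionally wander
```

In this sketch the pure-memory forager collects only the trickle of regrowth at its favorite site, while the mixed strategy harvests many fuller sites -- the same qualitative result the model reports.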

The case for digitally-driven stupidity assumes we'll fail to integrate digital freedoms into society as well as we integrated literacy. This assumption in turn rests on three beliefs: that the recent past was a glorious and irreplaceable high-water mark of intellectual attainment; that the present is only characterized by the silly stuff and not by the noble experiments; and that this generation of young people will fail to invent cultural norms that do for the Internet's abundance what the intellectuals of the 17th century did for print culture. There are likewise three reasons to think that the Internet will fuel the intellectual achievements of 21st-century society.

First, the rosy past of the pessimists was not, on closer examination, so rosy. The decade the pessimists want to return us to is the 1980s, the last period before society had any significant digital freedoms. Despite frequent genuflection to European novels, we actually spent a lot more time watching "Diff'rent Strokes" than reading Proust, prior to the Internet's spread. The Net, in fact, restores reading and writing as central activities in our culture.

The present is, as noted, characterized by lots of throwaway cultural artifacts, but the nice thing about throwaway material is that it gets thrown away. This issue isn't whether there's lots of dumb stuff online—there is, just as there is lots of dumb stuff in bookstores. The issue is whether there are any ideas so good today that they will survive into the future. Several early uses of our cognitive surplus, like open source software, look like they will pass that test.

For Carr, the analogy is obvious: The modern mind is like the fictional computer. "I can feel it too," he writes. "Over the last few years, I've had an uncomfortable sense that someone, or something, has been tinkering with my brain, remapping the neural circuitry, reprogramming the memory." While HAL was silenced by its human users, Carr argues that we are sabotaging ourselves, trading away the seriousness of sustained attention for the frantic superficiality of the Internet. As Carr first observed in his much discussed 2008 article in The Atlantic, "Is Google Making Us Stupid?," the mere existence of the online world has made it much harder (at least for him) to engage with difficult texts and complex ideas. "Once I was a scuba diver in a sea of words," Carr writes, with typical eloquence. "Now I zip along the surface like a guy on a Jet Ski."

This is a measured manifesto. Even as Carr bemoans his vanishing attention span, he's careful to note the usefulness of the Internet, which provides us with access to a near infinitude of information. We might be consigned to the intellectual shallows, but these shallows are as wide as a vast ocean.

Nevertheless, Carr insists that the negative side effects of the Internet outweigh its efficiencies. Consider, for instance, the search engine, which Carr believes has fragmented our knowledge. "We don't see the forest when we search the Web," he writes. "We don't even see the trees. We see twigs and leaves." One of Carr's most convincing pieces of evidence comes from a 2008 study that reviewed 34 million academic articles published between 1945 and 2005. While the digitization of journals made it far easier to find this information, it also coincided with a narrowing of citations, with scholars citing fewer previous articles and focusing more heavily on recent publications. Why is it that in a world in which everything is available we all end up reading the same thing?

June 4, 2010

This is what happens to a video after it has been uploaded, downloaded and re-uploaded to YouTube 1,000 times. Straight copying of digital data is (mostly) lossless -- it's compression and conversion that create this sort of nastiness. Some of this degradation is noticeable even after only one generation (re-encoded MP3s, for example).
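The compounding effect can be sketched with a toy model. Here a simple moving-average blur stands in for a codec's low-pass behavior (YouTube's actual pipeline is far more complex): each re-encode discards a little detail, and the error relative to the original grows with every generation.

```python
import math

def reencode(samples):
    """One 'upload' generation, modelled as a 3-tap moving-average blur
    over a circular signal -- a stand-in for a lossy codec, not a real encoder."""
    n = len(samples)
    return [
        (samples[i - 1] + samples[i] + samples[(i + 1) % n]) / 3.0
        for i in range(n)
    ]

# One period of a sine wave stands in for the "video".
original = [math.sin(2 * math.pi * i / 100) for i in range(100)]

signal, errors = original, []
for generation in range(50):
    signal = reencode(signal)
    # RMS error of this generation versus the pristine original
    rms = math.sqrt(
        sum((a - b) ** 2 for a, b in zip(signal, original)) / len(original)
    )
    errors.append(rms)
```

Each pass attenuates the signal by the same factor, so the distortion compounds geometrically -- tiny after one generation, unmistakable after fifty, which is why a thousand re-uploads leave so little behind.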

Buddhist magazine Tricycle recently interviewed the IEET's James Hughes about his unique take on transhumanism and Buddhism -- and how the two seemingly disparate philosophies should be intertwined.

Excerpt:

As a former Buddhist monk, Professor James Hughes is concerned with realization. And as a Transhumanist—someone who believes that we will eventually merge with technology and transcend our human limitations—he endorses radical technological enhancements to humanity to help achieve it. He describes himself as an “agnostic Buddhist” trying to unite the European Enlightenment with Buddhist enlightenment.

Sidestepping the word “happiness,” Hughes prefers to speak of “human flourishing,” avoiding the hedonism that “happiness” can imply.

“I’m a cautious forecaster,” says Hughes, a bioethicist and sociologist, “but I think the next couple of decades will probably be determined by our growing ability to control matter at the molecular level, by genetic engineering, and by advances in chemistry and tissue-engineering. Life expectancy will increase in almost all countries as we slow down the aging process and eliminate many diseases.” Not squeamish about the prospect of enhancing—or, plainly put, overhauling— the human being, Hughes thinks our lives may be changed most by neurotechnologies—stimulant drugs, “smart” drugs, and psychoactive substances that suppress mental illness.

Richard Eskow, who did the interview, followed it up with a rebuttal of sorts: Cerebral Imperialism. In the article he writes,

Why “artificial intelligence,” after all, and not an “artificial identity” or “personality”? The name itself reveals a bias. Aren’t we confusing computation with cognition and cognition with identity? Neuroscience suggests that metabolic processes drive our actions and our thoughts to a far greater degree than we’ve realized until now. Is there really a little being in our brains, or contiguous with our brains, driving the body?

To a large extent, isn’t it the other way around? Don’t our minds often build a framework around actions we’ve decided to take for other, more physical reasons? When I drink too much coffee I become more aggressive. I drive more aggressively, but am always thinking thoughts as I weave through traffic: “I’m late.” “He’s slow.” “She’s in the left lane.” “This is a more efficient way to drive.”

Why do we assume that there is an intelligence independent of the body that produces it? I’m well aware of the scientists who are challenging that assumption, so this is not a criticism of the entire artificial intelligence field. There’s a whole discipline called “friendly AI” which recognizes the threat posed by the Skynet/Terminator “computers come alive and eliminate humanity” scenario. A number of these researchers are looking for ways to make artificial “minds” more like artificial “personalities.”

I'll be at the H+ Summit @ Harvard during the weekend of June 11-12 and I hope to see you there. The Summit is an educational and scientific outreach event that covers the themes of the impact of technology on the human condition. It is hosted and organized by the Harvard College Future Society, in cooperation with Humanity+.

Weaving in futurism, technoprogressivism and transhumanism, the H+ Summit is part of a larger cultural conversation about what it means to be human and, ultimately, more than human. This issue lies at the heart of the transhumanist movement -- and is a common topic on this blog.


George Dvorsky

Canadian futurist, science writer, and ethicist George Dvorsky has written and spoken extensively about the impacts of cutting-edge science and technology—particularly as they pertain to the improvement of human performance and experience. He is a contributing editor at io9, Chairman of the Board at the Institute for Ethics and Emerging Technologies, and program director for the Rights of Non-Human Persons program.