Mitochondrial Singularity

As Charlie points out, there are lots of ifs and buts about the coming singularity, the day when machine intelligence finally overtakes the human mind. But what if the singularity is already underway? And if it is--what does it look like?

Suppose it looks like mitochondria. Suppose we're becoming the mitochondria of our machines.

How did mitochondria get to what they are today? The (now classic) theory of endosymbiosis began as a New-age feminist plot by Lynn Margulis, a microscopist known for setting paramecium videos to rock music. Around one or two billion years ago, a bacterium much like Escherichia coli took up residence within a larger host microbe. Either the larger tried to eat the smaller (like amebas do), or the smaller tried to parasitize the larger (like tuberculosis bacteria do). One way or another, their microbial descendants reached a balance, where the smaller bacterium was giving something useful to the host, and vice versa. In fact, this sort of thing happens all the time today. If you coculture E. coli with amebas, an occasional ameba will evolve with bacteria perpetually inside--and the evolved bacteria can no longer grow outside. They are slipping down the evolutionary slide through endosymbiosis, to eventually become an organelle.

But the price of endosymbiosis is evolutionary degeneration. Genetically, the mitochondrion has lost all but a handful of its 4,000-odd bacterial genes, down to 37 in humans. Most of these genes conduct respiration (obtaining energy to make ATP). From the standpoint of existence as an organism, that seems pathetic. The mitochondrion is a ghost of its former identity.

But is it so simple? Did mitochondria really stay around just for that one function? If that’s all the genes that are left, then how do mitochondria contribute to tissue-specific processes such as apoptosis (programmed cell death), production of oxygen radicals, and even making hormones?

Surprise--about 1,500 of those former mito genes are alive and well in the nuclear chromosomes. How did the genes get there? First, mitochondrial DNA replication is error-prone; errors accumulate there much faster than in the nuclear DNA. Second, DNA replication often duplicates genes--the leading way to evolve new functions. Suppose a duplicated gene ends up in the nucleus. It will stay there, while the mitochondrial original decays by mutation. Thus, over many generations, the mitochondria outsource their genes to the nucleus.
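The outsourcing ratchet can be caricatured in a few lines of code. This is a toy model with made-up rates, not real mitochondrial genetics: each generation, any non-essential mitochondrial gene may acquire a working nuclear duplicate, after which the error-prone mitochondrial copy decays; a small core of genes is assumed to be kept local by selection.

```python
import random

random.seed(42)

START_GENES = 4000   # rough gene count of a free-living bacterial ancestor
CORE_GENES = 37      # genes retained in the human mitochondrial genome
P_TRANSFER = 0.002   # made-up per-gene, per-generation chance of nuclear duplication
GENERATIONS = 5000

mito_genes = START_GENES
for _ in range(GENERATIONS):
    transferable = mito_genes - CORE_GENES
    # Each transferable gene may gain a nuclear duplicate this generation;
    # once the nucleus holds a working copy, the mitochondrial original is lost.
    lost = sum(1 for _ in range(transferable) if random.random() < P_TRANSFER)
    mito_genes -= lost

print(mito_genes)  # collapses toward the retained core
```

Whatever rate you pick, the shape is the same: an exponential collapse of the mitochondrial gene count toward whatever core selection refuses to let go of.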

Is this starting to sound familiar? As Adam Gopnik writes, "We have been outsourcing our intelligence, and our humanity, to machines for centuries." Ever since Adam and Eve put on clothes (arguably the first technology), we have manipulated parts of our environment to do things our bodies no longer have to do (like grow thick fur). We invented writing, printing, and computers to store our memories. Most of us can no longer recall a seven-digit number long enough to punch it into a phone. Now we invent computers to beat us at chess and Jeopardy, and baby-seal robots to treat hospital patients.

As we invent each new computer task, we define it away as not "really" human. Memory used to be the mark of intelligence--before computers were invented. Now it's just mechanical--but as Foer notes in Moonwalking with Einstein, memory is closely tied to imagination. Once we can no longer remember, how shall we imagine? And if all our empathy is outsourced to dementia-care robots that look and sound like baby seals, what will be left for us to feel? Poetry and music? Don't even mention them--computers already compose works that you can't distinguish from the human-made.

Yet we humans still turn the machines on and off (well... sometimes). The machines aren't actually replacing us, so much as extending us. That's the world of my Frontera series. Humans still program the robots and shape the 4D virtual/real worlds we inhabit. But those worlds now shape us in turn. Small children exhibit new reflexes--instead of hugging their toys, they poke and expect a response.

The real question is, what will be the essential human thing left that we contribute to the machines we inhabit? Will we look like the "brainship" of Anne McCaffrey's The Ship Who Sang--or more like the energy source of The Matrix? Mitochondria-hosting cells ushered in an extraordinary future of multicellular life forms, never possible before. Human-hosting machines may create an even more amazing future world. But if so, what essential contribution will remain human?


It's basically an infertility treatment that involves implanting the egg nucleus from an infertile woman's egg into a donor egg (whose nucleus has been removed). There will be only two sets of nuclear material, but the mitochondrial DNA will come from the donor, who thus ends up as the 'third parent'.

The 37 or so genes in that DNA are vastly outweighed by the tens of thousands of genes from the 'real' parents, and probably outweighed by the genes from whatever microbial biome the child ends up with too. But if the original mitochondrial DNA ended up in the nucleus, there's even less justification to say that there are three parents.

You're right, "three parents" is a misnomer. But interestingly, if people get used to that idea, will it be a "gateway" to more extreme genetic modifications, where you actually borrow somebody else's chromosome?

Regarding this post, does it need to be a zero-sum game? Toddlers have iPads these days, sure, or else they quickly learn their way around their parents' smartphones. But do they stop hugging teddies, too?

Likewise, I rely on Wikipedia to recall information that I haven't memorized (who was that director? I know they helmed "Brick", I'll look it up) but that doesn't mean I don't actively remember as much as I can. Wikipedia helps my blog posts, that is, but it does fuck all for my conversations and daily interactions.

The effect, I think, is comparable to dictionaries/thesauruses on our passive and active vocabularies. I forget the real figures, but as an approximation we can actively use 100,000 words and passively recognize 300,000 words. Lexicons help us source our passive vocab for the purposes of our writing, but that doesn't mean that lexicons have diminished our active vocab.

Secondly, you're talking of evolutionary forces that only the most privileged would experience. If 20%, or 8%, or 1% of the world's population lives a lifestyle where machines replace most of their functions, then how will we get a cumulative evolutionary effect?

I wouldn't say we're using technology to outsource functions, more to augment them. Technology is basically the outward expression of cultural evolution, if we look at it in the right way. The earliest known examples of recognisable writing (cuneiform) are documents keeping track of inventory - how much of what was in storage where. Not so much a replacement for human memory as an augmentation of it, a way of making sure there was a "memory" of the harvest in case the original storehouse keeper wasn't available. It was a way of taking a memory out of one specific person's head, and putting it where someone else who wasn't that person could find it - a way of sharing memories.

After a while, people realised this could be used for more than just remembering whose crops were stored where in the storehouse. We started recording things like our stories about how we became who we were (in these enlightened times, we call these stories myth and legend). We could record rituals, ways of pleasing the gods. We could create a tradition which lasted more than a generation or two before being forgotten, or passing out of fashion. We could make things endure.

We could also use it to augment what our minds could do by way of thinking - it was possible to put together ideas in a way which allowed for extended construction, creating cities and temples and pyramids, because one person could come up with the idea of how to do it, and then others could improve on the design over the years, make changes. We could figure out the best way to apply the knowledge we gained, and we could accumulate the knowledge together, make collections of it, make it into something special.

Memory is still special, still human (and still a mark of intelligence - being able to pass exams without needing the prosthetic memory of an electronic device to prompt us is an ability or skill we still select for at the upper levels of our educational systems). The core of memory, things like our earliest memories of things we've done, places we've been - the individual memory of our lives, and the way we shape it and it shapes us, those are still needed. Thought (in the sense of creativity) is something we retain as well, that's still defined as being solely a property of humankind. Sure, we can mock up a simulacrum of creativity with a randomising algorithm (... but where did the algorithm come from?) and get a computer to slap things together in a random fashion and see what happens. But the process of taking known elements, putting them together in a known fashion, and still coming up with something new at the other end? Storytelling, for example? That's human.

We haven't yet created a computer which can tell us stories equivalent to the ones we tell ourselves. I somehow doubt we ever will.

Maybe we are passing out our humanity, surrendering it to our technological augments, sacrificing one half of our capability to view the universe in all its glory in exchange for the wisdom technology gives us. But even so, I suspect the one-eyed god that humanity becomes will still retain Huginn and Muninn (thought and memory) reporting back to us on all the world's doings, and our greed and hunger (Geri and Freki) will still sit by our sides, threatening to consume us should they get out of hand.

If you look at memory research, it's scary. We don't really "remember"; we create memories. It's easy to erase our chip and implant things. And we've certainly lost the ability to "remember" the way the Medievals did. Only with intensive training can we remember a string of twenty objects heard once, let alone, say, a hundred objects. Read Moonwalking with Einstein.

First off, I've got to take you to task about "evolutionary degeneration." I know you know better than that. Mitochondria have continued to evolve inside cells, and they continue to this day. That's why evolutionary biologists can use mitochondrial genomes to study the evolution of their hosts. Horizontal gene transfer within the community of the cell isn't evolutionary degeneration. Evolution does NOT have an arrow pointing up (towards complexity) or down (towards "degeneration"). It only has an arrow pointing forward towards survival, whether that involves complexity or simplicity.

Personally, I think we're already in the singularity, at least compared with the normal pace of human discovery. Our machines are outstripping us in all manner of things (e.g. Google), but people keep moving the bar on what constitutes "human intelligence." They've found it's infeasible to make a computer that acts precisely like a human, just as they've found it's infeasible to make a nuclear submarine that swims like a fish or a fighter jet that flies like a bird. All of these bits of tech move faster and higher than their animal counterparts, but they can't simultaneously do things that animals routinely do for a number of technical reasons, mostly having to do with keeping their crews alive inside.

The real problem going forward is that humans are likely to become more like ants. In this, I don't mean that we're going to become eusocial hive creatures. Rather, ants form the majority of most insect communities in which they occur (as do humans in mammalian communities). As a result, they are the targets for a vast number of predators, pathogens, parasites, and symbiotes. Humans are certainly following suit in the symbiosis game (we call it domestication), but as we continue to be the dominant mammal, parasitizing us will be a dominant evolutionary driving force on everything that can pierce our skin or enter our gut. Or eat our food before we get to it.

Even more ironically, we'll have more social parasites. In this case, I don't necessarily mean rentiers. Instead, I'm thinking of all those pets that people call their "children." I can't think of a better example of a social parasite than that, a non-human animal that takes the place of offspring. And unlike ants (which have lots of non-ant social parasites), we've deliberately bred our own social parasites, animals like toy poodles for example. I have nothing against dogs and cats, but they aren't our offspring. I prefer to keep them around as mutualists, and find a way for them to earn their keep. If they're replacing the child you could have had, you've autodarwinated yourself.

I remember, a year or two ago, a documentary on the BBC which was an historical compilation of documentary material on dogs, with a commentary on how things had changed.

One of the big mistakes was that the "dogs are wolves" hypothesis led to attempts to understand dogs by studying wolves in zoos. As people began to study wolves in the wild, it was noticed they behaved differently. Because the wolves in the zoo couldn't get away from the pack, they behaved differently.

Remember Barbara Woodhouse and her dog training? That was ultimately rooted in the zoo-wolf model. It finished with an Army bomb-sniffer dog. And it was clearly a totally different style of training. The dog was doing these things because they were fun, not because the handler was an Alpha.

I realised that over the years, that is pretty well how we treated our pets, dogs and cats. And you could see signs of that going back through the generations, in the stories my father told. There was the lurcher cross that was my grandfather's sheepdog. It didn't look like any sort of collie, but it behaved like one.

It's not that there aren't moments of Woodhouse-style discipline in the mix, there are times when you need to have control. But the balance was different.

Nobody would argue that dogs were not wolves, however mucked about with by domestication and the sometimes insane breeding that can happen. So, when we're talking about mitochondria, what might we have missed? Is there a zoo-wolf error?

An obvious thought: what genes have passed from the host cell to the mitochondria? Is it really a one-way path? The mitochondria have to be female line, but have there never been genes from the Y-chromosome hitching a ride?

How can we tell? The markers in our genes are suggesting a rather different history of such things as Celticness but could we spot a hitchhiker? I don't know. But sequencing individual genomes is getting easier and easier. Is the problem one of analysis rather than data collection?

Heteromeles, if you prefer the term evolutionary "reduction" in terms of reduction of gene content, that does happen, and did for the mitochondrial genome proper. It is driven by energy efficiency, not teleologically, but by natural selection; the proto-mitochondria that lost genes fastest produced progeny fastest within the host. In fact (sadly) this continues to happen to mitochondria today, within the somatic cells of a person; it can lead to late mitochondrial defects, and possibly aging (that's controversial).

zhochaka: the migration of mitochondrial genes into the nucleus appears to be more-or-less a one-way ratchet, but evidence one way or the other is slim: after all, until quite recently we thought mitochondrial DNA was always passed exclusively down the female line (not *quite* so) and didn't realize that it recombined (recombination appears to be quite frequent, but normally you can't tell: except in those freak cases where paternal mitochondrial DNA gets in, the recombiners are near-clones of each other).

However, though it is a one-way ratchet it is *not* a total one. No DNA-less mitochondria have ever been found, despite two billion years of ratcheting (some DNA-less descendants of mitochondria have been found in some organisms, but all have lost their respiratory function). It appears that we will never have DNA-less mitochondria, and though species differ in what genes have been retained, genes involved in the initial construction of new transmembrane proteins involved in the respiratory chain are almost always retained.

Nick Lane and others have speculated (convincingly, to this layman at least) that this is necessary so that each mitochondrion can individually control the synthesis of new respiratory complexes as needed, rather than the rate being crudely controlled on a whole-cell level.

btw, mitochondria aren't all that much like E. coli. They're quite like the rickettsiae, though (clearly highly conserved, or there'd be almost nothing left in common after four billion or so years of combined evolutionary divergence time).

Nix, "mitochondria" are defined by having DNA, but some protozoa have mitosomes or hydrogenosomes for which convincing evidence suggests they evolved by reduction from mitochondria (with no DNA left). So it can go all the way.

That is interesting about the paternal mitochondrial DNA; does it arise during development of the spermatozoa? Or does the sperm neck get internalized? Probably no one but me is interested, but I'd like to see the reference.

& @ 6
"Stories" huh?
Charlie just said that a computer that could do that, or properly analyse one, would be AI-complete, IIRC.

Commensals / Pets
Well, some so-called "domestic" cats are getting quite good at discriminating among their humans' inanimate objects.
The tom-kitten here can distinguish between tea-cup (boring) & milk-jug ... "oh, you've put WATER in it - poo!"

zhochaka @ 8
NOT an hypothesis - dogs are wolves.
But, they are socialised wolves, treated differently in "their human" packs. In the same way, the tame Russian Arctic foxes' behaviours have changed with domestication.
And the same can be seen in some urban foxes.
(Provided some stupid bastards don't throw things at them - I had one almost hand-tame about three months back, now someone has scared him ...)

It isn't clear to me that the mitochondrial analogy is correct, especially with regard to machines being the equivalent of the cell. I see the cell in our terms as the super organism of human society. So the ant analogy is closer.

I also agree with megpie that we are not losing functions overall. Our cognitive plasticity allows us to acquire new skills while losing old ones. We give up memory capability but gain new ones that are more immediately useful. (Unless Charlie's "peak brain" hypothesis is true, which is another factor).

I tend to think heteromeles's host-parasite analogy is more apropos, although I see computing machines as part of the parasite (hopefully more mutualistic) community. (It is hard for me to see the value of Facebook, which looks to my eyes more like a cognitive parasite--one which, in turn, outcompeted TV.)

So every time you read an article by a designer involved in information processing machinery, about how the interface was better, think about how that person is in league with the machines to allow them to extract more resources from you (time, money and information).

One of the big mistakes was that the "dogs are wolves" hypothesis led to attempts to understand dogs by studying wolves in zoos.

The reverse can be a problem too. Many years ago I talked to a guy who was helping reintroduce wolves; he said that folk often had the wrong interpretation of the behaviour of wild dogs/wolves/etc., because they'd been trained by dogs and the movies on how they behaved.

If a dog is being aggressive, its tail is down, it growls, bares teeth, etc. Ditto for "wolves attacking" in movies. But that is pack behaviour: dominance demonstrations dealing with intra-pack hierarchies. It's just that dogs consider humans to be pack members (at least any sanely socialised dog does).

Wolves don't do that to food. When a wolf pack is hunting it's having a f**king great time!

So when somebody sees a wild dog/wolf approaching looking "dog" cheerful it's probably just thinking "Wow! That would go down great with some ketchup!".

Karl Schroeder explores this sort of relationship in some of his work. In his Virga series humans and other life provide the machines with desires. Machines don't want things but without that they don't do much. Life wants all kinds of things, air, light, reproduction.

Thanks. Genetic degeneracy has a lot of moral baggage attached to it, dating back to the 1920s and 1930s, what with Eugenics and other ideas about which races were better and worse.

I don't mind talking about medical degeneration, like macular degeneration. However, equating evolutionary simplification with evolutionary degeneration is problematic. For example, a horse's hoof is degenerate when compared to a lizard's foot. It has fewer bones and less structure, and I'm willing to bet there are a bunch of non-functional genes involved. However, the horse's hoof is adapted to running in a way that a lizard cannot emulate, precisely because of that simple hoof. Simplification is not degeneration.

As for the rest, I think the best we can hope to do is to emulate ants and cyanobacteria. Both "took over the world" in their time, and both are still around. One could argue that cyanobacteria (both free-living and chloroplasts) still run the world, and obviously, there are many more ants than people, despite our feeble attempts to control them. However, they have become the bases for entire ecosystems that depend on them. I strongly suspect that will be our long term fate too.

I'm rather less sanguine about machines parasitizing us, but as someone else pointed out, why do we have Facebook and Angry Birds, anyway? It's already happening, and we don't need the Matrix or even Blood Music for them to take our resources.

The symbiosis between the mitochondrial ancestor and the cell (which appears to have been an archaeon) was win-win, in that multi-celled life couldn't exist without them.
At the moment we are inside most of our technology and most of us would die if we were plucked from our cities and dumped in the wild.
However, with nano-tech, most of our technology will be inside us and I think the situation will then be reversed with the biological host moulding the technology.
I also believe this new human/animal/techno symbiote will enable true multi-brained hive minds to develop, but they will be lateral connections rather than top-down connections like insect hives. A bit like macrolife but even more complex and weird. The era of individual humans is almost over, but what's coming should be better.
By the way, does anyone know if Charlie is arriving at Swancon in Perth, Australia, on Friday or Saturday?

Ideally, of course, I'd like to live with a mature nanotechnology that works in water, at room temperature, using locally available atoms and energy derived mostly from the sun.

Oh yeah, that's right, we do--it's called the biosphere. It's also 4.5 billion years old.

The truly awkward part about most nanotechnological speculation is that it forgets that biology is nanotechnology. Reinventing a cell would be humorous, if it didn't involve so many horribly toxic mistakes, starting with the "oops, we didn't realize that the size and shape of the nanoparticles mattered, we just used simple chemical formulas to measure toxicity." It's this awkward mix of abysmal ignorance and towering arrogance that ultimately gets me, and I do hope the speculators grow out of it, preferably sooner rather than later.

I suspect he's trying for the Friday, being one of the special guests. Looking at my Twitter feed, he was in KL 11 hours ago, so he should be in Perth shortly. (Or at least, at the airport, but that's not that far out as far as I remember.)

From a comment posted on Ultraphyte: In my mind, the pinnacle of intelligence as we understand it is the ability both to produce and to consume pieces of art. People worry about how computers and robots will take over their jobs with their relentless efficiency, but I anticipate that as artificial intelligence becomes more and more complex, it will serve as an equalizer between humans and robots, because robots will not be satisfied with simple rote task production, but will also want time to enjoy movies and draw pictures etc. Highly complex robots may feel pain, and may not be satisfied with limited warranty programs but may instead want comprehensive healthcare. As these changes take place, I think it will be harder to differentiate between robots and humans, even as now in the 21st century it is becoming harder to differentiate different races and cultures of human beings.

Some instances are potentially misleading: I can see how Piet Mondrian's geometric paintings lose some of their content in the images we see in books or on the web. The brushwork doesn't show. But is that loss of signal significant?

And I just don't get Jackson Pollock.

A lot of pre-modern art is both a remarkably crafted image of something real and a symbol-heavy coding that the viewer, the cultivated man of his time, could understand. Without the symbolism, we can still see the picture.

Some twentieth century art is all symbolism and little or no picture. Worse, the contextual information can seem to be entirely outside the image.

Could an A.I. that passes a Turing test also pass a Jackson Pollock test, and can we expect such a test to be useful? It just seems to be an opportunity to make plausible comments without any need for understanding.

I'm trying to wrap my head around your analogy (metaphor? simile?) and I think you might have it backwards. That is, the role of mitochondria seems a better fit to computers than to people. We incorporate computers into our lives, using their specialized abilities to boost our own. We keep them safe, give them the resources they need, even help them to reproduce.

Extending the analogy, if the advent of eukaryotic cells was a type of singularity, then humans using computers could also be a type of singularity, and yes, it's going on right now. If so, then what it will look like is that people will more and more closely integrate computers into themselves, until a new type of organism emerges. We will have added a new "organelle" and become something different (better, hopefully).

And don't think that mitochondria are degenerating. One way of looking at it is that the mitochondrion is a supremely successful organism. For every eukaryotic cell, there are many mitochondria. They sit safe and protected, outsourcing their drudge work to the host cell and concentrating on what they do best. They help their host to be successful and reap the benefits. Pretty smart.

Are the mitochondria "successful"--that is an interesting question. You could argue they are among the most successful reproductive entities on Earth, in that they populate nearly all eukaryotes. But the same could be said of a retrovirus such as HIV. Retroviruses are universal, and their defunct gene sequences fill much of our animal genomes. They offer huge potential for genetic medicine--but--

Would a human being consider the existence of a mitochondrion or a retrovirus "success"?

What if we outsource our own consciousness and self-awareness to the machines, perhaps without realizing it? If we take too many drugs to "control depression" etc., while shoving our memories into the computer? There are neuroscientists now working on brain surgery to excise painful memories.

Personally I'm optimistic, as you suggest, that the human-machine extensions will lead to more exciting things beyond Jackson Pollock (if we don't burn out our planet first). But I also see the other road.

Ideally, of course, I'd like to live with a mature nanotechnology that works in water, at room temperature, using locally available atoms and energy derived mostly from the sun.

Oh yeah, that's right, we do--it's called the biosphere. It's also 4.5 billion years old.

Mechanical manufacturing (nanoscale or macroscale) can do many things biology can't, and vice versa. Making an organism to grow steel wool is as difficult as building a factory to make wood. Even though the atoms needed to make both items are abundant and widely distributed in nature, and even though both products can be made with solar energy.

It’s pretty clear to me that the mitochondria have been absorbed by the cell. Neither can survive without the other, but intention has been retained by the cell, and it decides what to do with the energy the mitochondria generate. It’s the same with humans and technology: humans have kept control of intention. However, we are following one of the main tendencies of life and putting as many of our behavioural processes on automatic pilot as possible (see Plotkin for a good description). Previously this meant passing off behaviour and skill sets to instinct; now it means delegating them to technology.

In presently occurring symbiosis between human and technology, intention and guidance is still held by the human host (I don’t believe technology as we make it now has inherent intention). Also since we are on the road to becoming an extremely long lived creature (Sorry Deathists and Environmental Malthusians, but you will lose) most adaptation will occur within a generation using technological add-ons rather than through the ancient selective filter between generations.

We will adapt and change on the run as a type of shape-changing techno-organism.

Robert: I arrived in Perth on Tuesday (and have been scarce around these parts due to the hotel wifi costing about 15 times as much as 3G bandwidth on Telstra -- it took time to get a SIM card for my mifi router).

Also since we are on the road to becoming an extremely long lived creature (Sorry Deathists and Environmental Malthusians, but you will lose) most adaptation will occur within a generation using technological add-ons rather than through the ancient selective filter between generations.

That's certainly a data-free assertion. Based on what I can see (e.g. this article), lifespan is increasing, but the developed world passed the inflection point before 1970, and since it's following a sigmoid, we're unlikely to get above 100 year lifespan on average. Ever. Note also that at least 40 years of said lifespan (0-10, 70-100) will be spent largely depending on others' care, and certainly in non-reproductive states that have nothing to do with evolution.

Contrast that with the oh-so-pleasant reading from the IPCC 4 about the types of mortality associated with climate change and global warming. These include malnutrition, infant diarrhea, cardiovascular disease, infectious disease, and death from heat waves, floods, and potentially droughts. Note that these disproportionally affect both the young and the old. In other words, there's no reason to be optimistic that we will become much longer lived than we are now. Furthermore, any age extensions may well come at a high cost in money and suffering for the individuals extending their lives, at least if they resort to modern medicine.

I'm guessing you're assuming that we'll be happy and healthy until killed by an accident. It's more likely to be that you're stuck in a wheelchair, in the cheap apartment that you can afford, sweating through your hundredth hot summer and praying that the local aid volunteer will get you to a cooling center in time, because you can't get yourself there any more.

To add to Heteromeles's post: I work in regenerative medicine, and I'm very optimistic about this field's potential to radically improve quality of life in the future. But I'm not optimistic to the point that I think the future will be full of 100-year-olds with 20-year-old vitality; that future seems nowhere on the horizon.

The reason is that aging is a hugely complex process, affecting every system in different ways. There might be treatments for many age-related diseases that allow people to live longer and extend healthy living, but sure as anything this will reveal another nasty slew of diseases that were kept low or hidden by other conditions taking precedence. In other words, even if you extend life expectancy (hypothetically) to 120 and healthy middle age to 80, there are still decades of life that will be spent in declining health, requiring permanent care.

This may or may not be one day solved, it's impossible for anyone to say. It could be that with mature fields like regenerative medicine and a complete understanding of the human -omics (from our genomes to our microbiomes via our metabolomes) that keeping people living very long and healthy lives will be possible. But equally it might not be true, there's nothing to indicate either way.

As you can probably guess, I'm a supporter of the adage "anything that isn't impossible will eventually happen, and even the seemingly impossible has a way of happening sometimes".

Eventually can be a very long time. It seems like the materials were available for someone to develop the voltaic pile during the days of the Roman Empire. But it was not actually developed until more than 1000 years later*. If there's no aging cure discovered this century we may not know whether the failure is more like the Romans failing to discover batteries or alchemists failing to turn lead into gold.

Ryan, I agree with your assessment of current work on aging. In general, though, isn't a "mature field" one that's awaiting the next unexpected breakthrough? Perhaps the unexpected breakthrough, in this case, would be the mapping of human brains onto silicon devices. Or perhaps it might be our ability to reprogram adult skin cells into tissues that grow new organs.

While the timing of an unexpected breakthrough cannot be predicted, perhaps on a larger scale one can predict (like earthquakes) roughly how often such breakthroughs will occur.

Another important point we haven't noted yet is class difference. The USA is steadily pulling apart between the unemployed/uneducated underclass and the educated/wealthier-than-ever upper class. Most of the advances we talk about are available only to the upper class. I think Europe is starting to look more like this.

Making an organism grow steel wool is as difficult as building a factory to make wood.

Steel, and most metals, develops a thin, tenacious oxide on exposure to air and water. Nanoscale metals tend to oxidize rapidly in the terrestrial environment (gold and platinum excepted).

I used to work for a company that made iron nanoparticles. They were supposed to be sealed under nitrogen in glass jars, but quality control was poor. Every now and then a kilogram of nano-iron (several person-weeks of work) would flash into useless rust.

Jay @ 40
You might have been lucky there.
Under the right conditions ( & not too difficult to achieve, either )...
Iron burns in air/oxygen, very violently.
Google for "Thermic lance" (or thermal lance as the case may be) for illustrations!

Not even that exotic. You need an oxy-acetylene flame to weld steel, but oxy-propane is quite hot enough for cutting. The flame gets the metal hot enough to burn in the oxygen jet, which generates enough heat that a lot of metal in the cut is melted and physically blown out.

Wear wellington boots and cotton overalls, and have the trouser-legs of the overalls outside the boots.

As you can probably guess, others of us are of the school that there's a big difference between "not prohibited by the laws of physics" and "logistically possible." If it would take the energy output of South Korea to send one man on a one-way mission to Alpha Centauri, a lot of people (all the South Koreans) are quite right in questioning the logistics of wasting their resources on starflight. That's logistics, and it's as important as physics. Resources are rather more limited than the laws of physics imply.

I'd also point out that, right now, the really difficult problems have to do with politics (not just Washington, but with all human interactions), and that we've seen massively less innovation in political problem solving than we have in physical problem solving.

Global warming is a case in point. Technologically, it's a solvable problem, and in fact it's a largely technologically solved problem (Rather than derail the thread, I'll post references if you don't believe it). All of the hold ups (and they are huge hold ups) have to do with politics, and that's why we're looking at the worse version of global warming right now.

In other words, if you want to solve the most difficult real-world problems, start working on new ways to get people to work together and make difficult choices.

The iron particles did burn rather violently, or "flash" as I put it. The "luck" was that at the time several person-weeks worth of iron nanoparticle production was about a gram of iron nanoparticles in a rather large jar, so the flash was contained by the glass. The less lucky bit, of course, was the economic loss.

The least lucky bits involved management, which was a truly remarkable set of individuals. Their slogan should have been "Reinventing the triangular wheel" or perhaps "Safety? We don't need no stinkin' safety!".

heteromeles @ 44
You raise a very sore point.
There is a large & vocal minority still refusing to touch Global Warming at all, many claiming that it is a giant scam or con-trick.
Unfortunately, they have half a point, when guvmint policies, all claiming to be "green" of course, appear merely to be a money-making exercise, rather than actually addressing the real GW problem(s).
Classic example in GB is wind-power, which in terms of reliable & large-scale energy-production is utterly useless.
As for money being invested in nuclear or large-scale tidal - which would really help solve the problems - the sums, if not quite de nada, are pretty piffling.

There IS thus a "GW scam", but it's a guvmint revenue-raising one, irrelevant to the real problems we face.

I don't disagree. The big question is who pays and why, and being an American, I can easily say that we're not good at paying our debts, whatever our rhetoric says. The key point here is that neither global warming physics nor even the engineering are at all insoluble. Global warming is now mostly a political problem. As such, it's an unfortunate reminder that physics does not dictate what is and is not possible for people. It merely sets some boundary conditions.

As for references, I'd suggest http://skepticalscience.com/ for the user friendly version. I've also been downloading reports from the IPCC website (http://www.ipcc.ch/), but for whatever reason the IPCC 4 technical report is refusing to download except in pieces.

Joan: gene migration can indeed go all the way -- but if it goes all the way, the things that are left no longer respire: hydrogenosomes and mitosomes are nonrespiratory bodies. Mitochondrial DNA in the mitochondria is crucial for respiration: if you need the things to respire, there is some DNA that cannot move to the nucleus -- the obvious stuff like tRNAs, but also parts of the respiratory chain.

Nobody yet knows quite how the mitochondrial DNA gets in, nor how common it is: the case I know about was only discovered because the patient's dad had a mitochondrial disease, and when his offspring had it too it was fairly obvious that something odd was going on. Several percent of the population could have this, or it could be one in ten million. Nobody knows.

roberth2309, it's pretty clear to me that the cell has been overpowered by the mitochondrion. The cell starves for lack of energy in minutes without the mitochondrion; crucial pathways are routed through it; and it even determines whether the cell shall live or die (a major initiator of the apoptotic pathway is to rupture the mitochondria). The mitochondria get unique coddling at critical times of the life cycle (e.g. in immature oocytes), and are generally treated like the more important partner... oh, and there are many more mitochondria inside us than there are eukaryotic cells.

Mitochondria rule: we are just their slaves.

It is not meaningful to say which has 'retained intention'. Cells do not have intention to retain.

I was looking for references that said "Technologically, it's a solvable problem, and in fact it's a largely technologically solved problem" or something similar. I didn't read that whole 1088 page IPCC report, but a quick skim left the impression that they show there's plenty of energy in the environment, but gloss over the inherent difficulties and inefficiencies of gathering energy from sources that are orders of magnitude less concentrated than fossil fuels.

That's the political problem in a nutshell. Technically, it's certainly possible for the entire current world population to live with energy that's at least an order of magnitude less concentrated than is oil. One could argue that, when you strip away the massive subsidies, add the environmental costs, add the costs of storms and droughts, and add the costs for the militaries necessary to gain oil and defend the supply lines...yes, all that...that it would be cheaper and more beneficial for people to go to wind, solar, hydropower, and geothermal ASAP, purely from a humanistic and economic perspective. For example, there's no study showing that human happiness is related to energy use, above the basic level needed to provide sufficient food and a reasonably comfortable environment. It's pretty evident we could do that with renewables.

If you're screaming and pounding out a pungent response, let me point out this is the essence of the political problem. I live in an oil-fueled society, and switching to 100% renewables would cause massive dislocations for me and everyone I know. I don't have the money to make the switch by myself, and I have no way of borrowing it.

This is also a political problem. Those, like me, who are causing most of the problem are perfectly comfortable with the way things are. Our selfish behavior is going to result in the death of the culture I currently know, likely 100 meters of sea level rise (flooding almost every city I know), radical climate change to something that will look like the early Eocene after going through a super-arid time, and this heat wave will last roughly 50,000 to 100,000 years. But I'll be dead, so why should I care? There are something like 7 billion people who would have a better life if I did act, not counting unborn humanity, but that, so far, hasn't been sufficient to make us all want to change.

Agreed. I tend to use a very broad version of the term politics. Yes, it tends to be associated with official governance if you read the dictionaries, but any dispute between groups of people seems to be denigrated as "politics" right now, and I can't think of a better word to describe how issues related to human disputes seem to be more intractable than issues related to technology. Perhaps this is a symptom of living around the time of a singularity? If so, it certainly does not bode well for technological advances actually solving any of our pressing problems.

Separate issue, from Joan's original post. If I recall Sapp's Evolution by Association: a history of symbiosis, Dr. Margulis didn't originate the theory of endosymbiosis. Versions of that theory go back to the 1920s and 1930s, because a lot of people noticed that organelles look like bacteria. She simply formalized it and demonstrated that it was correct.

As for endosymbiosis being a New Age theory, au contraire. The real problem with symbiosis (especially the associated concept of mutualism) was that it was tied early on to communism. The concept of mutualism (two organisms that benefit each other) comes from the old mutual aid societies.

Symbiosis was seen from early on as a communist plot, while survival-of-the-fittest Darwinism was seen as nature supporting capitalism. This ideology still permeates our society, sometimes to ridiculous extremes. Even our laws allegedly favor open competition over close cooperation, despite the fact that the biggest corporations do very well with their symbiotic complexes with various governments.

To pick another example, general ecology textbooks devote a chapter to competition theory (even though it's always been hard to demonstrate in anything other than very controlled conditions, such as flour beetles). Most textbooks devote a few pages, if any, to symbiosis, despite the fact that essentially all multicellular organisms have some sort of symbiotic relationship with something else. Students are lucky to get a single lecture on it.

Evolution doesn't favor competition any more than it favors symbiosis. It favors leaving behind offspring, through whatever strategy works. That doesn't stop anyone from co-opting it to lend spurious support to various and conflicting ideologies.

It is not meaningful to say which has 'retained intention'. Cells do not have intention to retain.
I admit a single cell appears only to react to environmental signals...go this way-glucose- mmm. Turn back-too much acid-ow it burns.
One must remember that these reactions were selected over many generations for their survival value. And since intention can be defined as acting in order to achieve an end, I don't see it as an error to say that a cell acted in a certain way in order to survive and flourish/replicate. The host cell "sees" the outside world and reacts to it; the mitochondrial remnants, who are doing all the hard work in the fields and dying off like flies, only see what the host wants them to see.
To be politically incorrect, it's almost like "benevolent slavery": the mitochondria work themselves to an early death to make energy that the cell can't make. Also, all the signals triggering mitochondrial replication and apoptosis seem to originate in the cell's nucleus. Though it is true that the cell can't survive without its mitochondria, so it needs to give them enough to eat and, if they are good, an opportunity to breed.
You decide who is ruling whom.

Since Margulis is now firmly in the conversation, does anyone have any references on her last idea, that organisms which metamorphose are in fact merged genomes from different organisms? I keep an eye out for genomic studies that might shed light on this, but I have seen little so far. I did see some "gruesome" video of sea urchin development on the Beeb recently, and it certainly looked like the adult organism doing a "chest burster" out of the juvenile. Not very integrated development. The insects seem to have refined it far better.

Err, somewhat the other way round, but there is a long-standing discussion with the chordates about the primitive condition; the two possibilities somewhat boil down to tunicates (free-swimming larvae, metamorphosis to sessile adults) vs. cephalochordates (always free-swimming).

Well, but if we're talking about the cell nucleus, this also includes the mitochondrial genes transferred to it, blurring the picture even more. You might call these genes "hostages" of the cell nucleus, but then, the mutation rate for mitochondrial DNA is more than two orders of magnitude higher than the one for nuclear DNA, so there is some pressure to move important genes to the latter.

As a gene, where would you like to go today?

In a similar vein, if we upload human minds into the technosphere, will these minds decide like humans or like machines, if there is any difference between the two?

More generally, mitochondria's reproduction is tied to the cell, and technology is dependent on humans for reproduction, while humans are somewhat independent of specific technologies. But at the moment this is somewhat changing, with some humans unable to reproduce for hereditary reasons now reproducing with IVF. Or cesarean section being in widespread use. So in the long run, we might become somewhat dependent on those technologies.

OTOH, if you're into meme theory, we are already living in a symbiotic (i.e. both parasitic and mutualistic) relationship with the memosphere, where tools, a subset of the memosphere, are also a subset of the technosphere. And in some ways we already constrain our reproduction with regard to the utility of said offspring to memes. Ask people why trisomy 21 is so bad...

Alex:
Sorry, the idea of Margulis that multicellular organisms that metamorphose have merged genomes from different organisms does not hold up. It's a neat idea, but all the evidence points the other way. For instance, the imaginal disks of a Drosophila embryo clearly map onto the parts of the adult fly.

In real life (unlike most science fiction) the false ideas outnumber the good ones about a hundred to one. I tried to make this point in THE HIGHEST FRONTIER, where the plant biologist goes through all kinds of wrong-end ideas. And the right ones are usually right by accident.

More specifically, he's usually filed under anarcho-communism, which might differ somewhat in outlook but might also overlap with the more commonly mentioned anarcho-syndicalism. Which is not as inimical to individualist anarchism as one might think, since the individual needs a society and the society has to be individualistic, though some individualist anarchists like Tucker are somewhat close to what one would call "libertarian" in the US today. Where, incidentally, those are still somewhat different from today's "anarcho-capitalists".

BTW, once upon a time I promised some Dawkins or Wilson (along with Kropotkin's "Mutual Aid") to our local squatters. Maybe I should ask them if they're still interested. ;)

heteromeles @ 52
It's pretty evident we could do that with renewables.
Really? Sure about that?
Wind is a completely busted flush. Tidal, in the UK could be really big. I think you are still going to need heavy base-line power-generation - which means nuclear, when it comes down to it.
THEN put the renewables on top of that base-line & you are away, no problems at all.

Trottelreiner @ 59
Euw.
You've reminded me of a very nasty SF story (based, I think, on that idea) ... damn, can't find it right now: I think it was by Katherine MacLean, in the collection "The Trouble with You Earth People", originally published in a long-ago "Analog" [ Was it "The Origin of the Species"?? ]
... & @ 60
After Kropotkin's funeral, Dzerzhinsky & Stalin had been watching who the chief mourners were, and made sure they vanished into the camps ....

He reminisced how, after the arrest of some Makhnovists including himself in 1921, the authorities asked Trotsky what to do with them. The answer was something like "Execution by firing squad". Luckily for them, there was a congress of trade unionists in Russia at the time, the prisoners went on hunger strike, and the congress got them released. He left the Soviet Union and died in September 1945 in Paris.

Where, well, the Anarchists were no saints - look at the PR debacle that is "propaganda of the deed".

As for Katherine MacLean, after some googling, "The Origin of the Species" is about a "feeble-minded" boy whose brain is really more complex than normal brains, having two extra lobes. As told by the neurosurgeon, cutting those off makes him normal. Is this the story you were thinking about?

With regards to the idea that some disabilities might just be the next step in evolution, well...

Interestingly, it seems like recent gene duplication played some role in hominid evolution.

Lately, I've been wondering if humans have a somewhat higher rate of such events than other animals, maybe just some accidental glitch that led to our fast evolution, or even something selected for, e.g. a higher mutation rate for specific genes to cope with the ice ages.

Or some kind of mechanism to create behavioural freaks to fulfill some roles, e.g. dispersal or caring more for siblings, with phenotypes similar to hyperactivity or homosexuality, though that one might be more like a common genetic susceptibility, rare environmental stimulus:

Another thing: according to the "sessile urochordates are more primitive, cephalochordates and craniates are cases of neoteny" hypothesis, we might just be missing some stimulus to finally grow up, like an axolotl injected with thyroxine.

So if you feel compelled to attach yourself to some surface with your head with some adhesive secretion (tunicate larvae use some organs ventral from their mouth; you are free to use your tongue and spit), enlarge your pharynx and lose your extremities, spine and nearly all your brain, call me. You might just have reactivated a developmental program more than 500 million years out of use. ;)

While we are at developmental biology jokes: hope you don't start kissing with your "first mouth", as my prof used to say...

Welcome back, Joan. It's a pleasure to read you, and you will be happy (I think) to know I ordered your "High Frontier" in my bookshop today.
It may be a little out of the current talk, but have you ever thought of demons?
What are demons? Something nearly human whom you pay with blood or soul, and which rewards you with nearly superhuman powers: gold, information, power. The interesting thing is that you give them something (and something valuable) to access knowledge... now can you tell me when was the last time you remembered a phone number? My guess is that you didn't for ages! You asked your demon/iPhone/computer to remember it for you, and you accepted becoming a slave to its memory... didn't you trade your memory/freedom/soul for knowledge? Oh, yes you did! ... and so did I.
I am completely afraid of your mitochondrial story, as it reminds me of Hegel's slave and master and the history of power (and religion and demons of course).
Now, to come back to the main talk, could you tell us what you think of power? The powers that are, the powers that be, and the powers that could be?
I'm very interested in reading what authors think of power and politics.
Thank you

Trottelreiner
NO
Wrong story - wrong remembering (Still can't find bloody book)
It was about a species, apparently normally humanoid, that, under the "right" trigger-circumstances, became a sessile, unintelligent "plant". Euw.

There was also, IIRC, an either Conan Doyle or early H G Wells short about human neoteny, where Lord whatsisname had had the neoteny drug - was still alive @ age 156 (or whatever) ... but he was halfway between a Chimp & a Gorilla!

Wrong story - wrong remembering (Still can't find bloody book)
It was about a species, apparently normally humanoid, that, under the "right" trigger-circumstances, became a sessile, unintelligent "plant". Euw.

The Katherine MacLean story about a humanoid that reverts to an unthinking plant-like being is Unhuman Sacrifice. See this summary by "jennre". Kingsley Amis, in New Maps of Hell, wrote of it: "I take this, not too fancifully I hope, as a justifiably horrifying little allegory of what you can do to people when you interfere with them for their own good". He was discussing it as an example of SF's hostility to religion, and also as an example of the biological-puzzle class of story: how not to harm aliens when you don't understand their life cycle.

Since you mentioned false ideas vs true ideas, I do have a nightmare that our increasingly proliferating memes will result in our civilization failing due to escalating noise (untruths) vs signal (truths). If I understood Michael Shermer's point that the [individual] cost of error checking is so high, the logical conclusion is that bad memes (wrong and cheap to manufacture) escape being selected out.

By the mitochondrial analogy: Mitochondria have a ten-fold higher error rate than nuclear genomes; but their worst errors go out quickly, because a cell that can't respire is doomed.

With respect to memes, maybe I'm optimistic but I still think that erroneous memes eventually drown the generator. The US conservatives are finding that out today; their own memes have boxed them in. It's true that memes go off on their own, but it's also true that the Internet provides much faster correction devices than ever before. So which wins?

And I just don't get Jackson Pollock.
...Could an A.I. that can pass a Turing Test also pass a Jackson Pollock test, and can we expect such a test to be useful?

I can think of three things that Pollock was probably doing. Trying to evoke ideas and feelings without depicting objects. Trying to dredge up stuff from his subconscious. And experimenting with form, colour, texture and so on without being constrained by the rules of perspective and the need to depict objects. I've not read much that Pollock wrote, but these are some of the reasons that artists do abstractions.

It can be difficult to paint or draw a design that looks balanced, harmonious, aesthetic, or whatever complimentary adjective one wants to use. Achieving this requires skill even for completely abstract designs — it's not, as some people say, something that a child, or a chimp, could do. In her book Why a Painting Is Like a Pizza, Nancy Heller compares doing so with arranging the components of a pizza topping on top of the pizza in a way that's attractive. Anyone who has done this has experienced solving the same kinds of aesthetic problems as occur in some abstract painting.

As far as the Turing test goes: the A.I.'s brain would probably be excellent at visual pattern recognition, and is likely to be stocked with images of trees, flowers, mountains and other things that the A.I. needs to be able to talk about. (Which is, potentially, everything.) I think it's likely that many abstract paintings and drawings would trigger this pattern recognition, because they contain parts that look somewhat like trees, flowers, mountains, and so on. And from there, the A.I. might be able to retrieve information about these objects, including the emotional responses they usually evoke in people. So yes, I suspect it would pass a Jackson Pollock test.

"Giving a damn" is not essential to life. Trees and bacteria do just fine with no discernible motivations. This isn't to say that trees and bacteria don't respond to their environment, just that their behaviors, such as they are, are mechanistic and lack intent.

The vast majority of the universe has no particular reason to exist, and is apparently unbothered by the lack.

But their behaviour, though mechanistic, is for a reason. This is how they differ from non-living material systems, which just react regardless of whether that reaction is good for their survival or not. Intention for me is "reaction in order to..." and I also think this defines the border between life and non-life.
Bacteria and trees react to the environment in order to survive and procreate, stones don't.

A natural (?) metaphor for people as mitochondria is surely as employees of companies. It's not hard to imagine companies eventually providing all sorts of things beyond mere health care to some of their employees, such as enhancements which might not work as well when not connected to the company network, and you'd be well on your way to losing your independence. OK, there's no analogue to the original symbiosis which produced the first eukaryote, and companies are more like multicellular animals in the way they permit specialisation of tissues. But still.

It's pretty evident we could do that with renewables.
Really? Sure about that?
Wind is a completely busted flush. Tidal, in the UK could be really big. I think you are still going to need heavy base-line power-generation - which means nuclear, when it comes down to it.
THEN put the renewables on top of that base-line & you are away, no problems at all.

Depends on what you mean.

If the question is, can 9 billion people live comfortably on renewables, then yes, absolutely we all can.

If the question is whether a bunch of SUV-driving right-wingers can live on renewables, then no, because they would consider it a "huge loss" to be "forced" to live under such a "left wing socialist system." (the quotes are because I'm putting words in their mouths, not as scare quotes).

As for wind and solar being a busted flush, please, PLEASE remember that the energy sector is hugely politicized. I've seen so much spurious claptrap masquerading as "death of renewables" energy news that I suspect the same people who worked on the 2012 political campaigns are being paid to churn out propaganda for Big Oil. Read it all skeptically.

Remember, they have a lot of money, and a lot to lose if people turn away from oil. They are also trying to lure suckers in to buy oil lands for fracking (they get the oil, the landowners get stuck with the cleanup and damages). They have no reason to present themselves in a fair and balanced light, and they do not.

This is not to say that I'm a fervent believer in Big Solar and Big Wind. I've got problems with them too.

What I am saying is that you've got to read skeptically when checking the news about any of the energy sectors. If you're genuinely interested in what's really going on in the energy sector, it's worth diving into places like Chemical and Engineering News, or straight-up business journals, to try to get a sense of what wind and solar really cost, what operations are coming on line, and so forth. If you can get away from the spotlight, you can get some halfway decent information.

If you take a look at page 35 of that huge IPCC report (figure TS.1.3), it shows "renewable energy" at 13% of human energy use, with fossil fuels at about 85% and nuclear at about 2%. Of that 13%, about 12.5 is "bioenergy" (mainly firewood used in the third world, with corn ethanol and similar added in) and hydropower (good as far as it goes, but hard to grow because there aren't many good places left to dam). Solar, wind, geothermal, and tidal added up are less than 5 parts per thousand of current (2008) energy use.
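For what it's worth, the "5 parts per thousand" figure is just the renewables slice minus bioenergy and hydro. A trivial back-of-envelope sketch, using the approximate 2008 percentages quoted above (read off the IPCC figure, so treat them as rough):

```python
# Approximate shares of 2008 global primary energy use, as quoted
# from IPCC figure TS.1.3 (rough values, not exact data).
fossil_pct = 85.0
nuclear_pct = 2.0
renewables_pct = 13.0
bio_and_hydro_pct = 12.5  # firewood/bioenergy plus hydropower

# Solar, wind, geothermal and tidal are whatever remains of the
# renewables slice once bioenergy and hydro are taken out.
new_renewables_pct = renewables_pct - bio_and_hydro_pct
print(new_renewables_pct)        # 0.5 (percent of total energy use)
print(new_renewables_pct * 10)   # 5.0 (parts per thousand)
```

So the "new" renewables everyone argues about were, on these numbers, about half a percent of total human energy use.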

Renewable energy sources are diffuse, and diffuse energy is much less useful than concentrated energy. Turning the energy in dynamite into ocean waves is very easy and efficient, but turning the energy in ocean waves into dynamite is inherently difficult and inefficient.

No it isn't. Unless by "reason" you mean the most negative "reason" imaginable: that such organisms' ancestors did things that way too and survived long enough to reproduce. Other organisms that did things differently maybe didn't survive, or did other things differently, including the fine detail of their developmental structure, such that they may be classified as a different species.

Organisms aren't designed, have no function, and no goals: they simply exist and reproduce, and the ones that don't reproduce don't give rise to offspring. As we're all the descendants of those organisms that managed to replicate successfully for the past 3.5-odd billion years, this has shaped our properties ... but there's no supernatural agency at work.

Do we need all that energy to live our lives? No. We could theoretically cut back tremendously and still have a pretty decent life. I haven't done the math yet, but I suspect that if everyone on Earth lived on the median per capita energy budget or lower, we could get a lot closer to making a living entirely on renewables.

For political reasons, we're not going to make such cuts under anything but dire stress.

The difference is that we can't meet our current energy needs, because those needs are predicated on infrastructure that requires large amounts of cheap energy. To go renewable, we have to change that system. This is absolutely possible in an engineering sense, but in a political sense, it's a nonstarter.

That table is a little misleading in some ways - the top consumer being Iceland is due to the fact that they run almost entirely on renewable energy, a mix of hydro and geothermal. They have so much renewable energy that they export it. But running a submarine electrical cable down to the UK (their closest plausible customer, about 700 miles away) hasn't yet been done, so they export it in a different form - aluminium ingots.

If you drive along the Icelandic shoreline, you may well come across one of the smelting plants. There'll be a small access road for the staff on the land side, but on the shore side there will be piers where ships bring in bauxite and other ships ship out aluminium ingots. And there are massive power lines coming in, which bring the electricity that's used to turn that bauxite into aluminium metal. The country is only just outside the world's top 10 aluminium producers, managing to produce about 40% as much as the US does despite having a population about three orders of magnitude smaller.

It's only misleading because you expected it to be about oil. Additionally, you can rearrange the table in several ways but it doesn't invalidate the point. To be fair, we should point out that Canada has a slightly higher per capita energy consumption than does the US.

That still doesn't invalidate the central point. Back in the dim mists, this thread started because someone trotted out the old chestnut that anything that is physically possible will inevitably come to pass, due to human progress and ingenuity.

In this case, there's no physical reason why Americans can't slash our per capita energy consumption in half (if not by 75%), switch largely or entirely to renewables, and save the planet approximately 400,000 years of acidified oceans, melted ice caps, and all the other stuff that comes with the amount of CO2 we're emitting.

As we all know, there's no real political will to make any cuts at all, and so nothing is happening. Indeed, we have Greg talking about how wind is "a busted flush," and arguments about whether to pipe Canadian tar sands oil through Louisiana or truck it to Vancouver to ship it to China. I'd suggest this is a really good example of why the really difficult problems right now have to do with politics, not with engineering or physics.

No, I didn't expect it to be about oil. I expected it to be about consumption, and the case in Iceland is that they're not consuming it themselves, they're exporting it as much as they can. It's because their method of export embodies the electricity in a product that it gets counted as consumption.

Were they shipping in batteries, charging them up, and exporting them, I reckon the figures would be wildly different: the recipients would get the rating for the battery usage. But because it's aluminium ore rather than lead sulphate they're running the electricity through, it's different.

It does show that it's harder to work out the real figures. In the EU, carbon emissions are lower than they would have been. But that's because the heavy manufacturing is now in the Far East. We're consuming as much in terms of goods, and the production is having the same energy input and the same carbon output, but it's no longer on our slate.

My point is that there is a physical reason why that won't work, or is at least very questionable. That reason is the second law of thermodynamics and its corollary Carnot's theorem, which states that the maximum efficiency of a heat engine is limited by the temperature differential between the heat source and the heat sink. In general this means that diffuse energy sources can be harnessed only very inefficiently, which is a good reason to be skeptical of claims that renewable energy sources can be scaled up.
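For concreteness, Carnot's theorem says the efficiency ceiling of a heat engine is 1 - Tc/Th, with both temperatures in kelvin. A quick sketch (my own illustrative temperatures, not figures from any particular plant) shows why low-grade heat sources fare so badly:

```python
def carnot_efficiency(t_hot_k, t_cold_k):
    """Maximum possible efficiency of a heat engine: 1 - Tc/Th (kelvin)."""
    return 1.0 - t_cold_k / t_hot_k

# Supercritical steam plant: ~870 K steam against a ~300 K sink
print(round(carnot_efficiency(870, 300), 2))  # ceiling around two-thirds

# Low-grade geothermal brine: ~430 K against the same sink
print(round(carnot_efficiency(430, 300), 2))  # ceiling around 30%
```

A real supercritical plant only gets a fraction of its Carnot ceiling; a low-temperature source starts from a far lower ceiling before any real-world losses are even counted.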

Neither is wholly right, nor are they wholly wrong. And we can trace back a causal chain a heck of a long way. And then we hit a brick wall. Everything goes back to the Big Bang; space, time, and everything. But why the Big Bang, producing a universe with the characteristics it has? Can we ever come up with an answer that can be tested by science?

At that point, fiat lux is as good an answer as any other. We have, as a species, been using gods to explain the inexplicable for as long as we have been explaining things, and we keep finding more useful explanations. We do crazy things, such as the Birkenhead Drill, and we invoke a deity; but then ideas such as the selfish gene emerge.

I don't know if I will live long enough to exclude the big-sky-fella from the Big Bang. I'd rather have an explanation that gives a useful prediction. Reason alone can give us the Higgs boson or the Spanish Inquisition.

Actually, rather more than 7 billion people disagree with you. Unfortunately, we're all fed off a diffuse and unreliable power source called the sun, which runs through photosynthesis to make the food we require every day to live.

So no, you're wrong in general and in particular. The question is simply how little energy we're willing to live on and still call it a good human life. That is a purely political question. The second law of thermodynamics never comes into it at all.

For example, read the old anthropological literature about, say, the !Kung, the Inuit, or the Hopi, to pick three extreme examples, and all of them would say they had mostly "good lives" by their own standards.

Modern studies say much the same thing--if people have friends, family, sufficient food, sufficient water, and a set of stories that tell them what a good life is and how to live it, they'll be on average as happy as (if not happier than) a guy living in a McMansion in a suburb with his wife, no kids, who's paying down a half-million dollar mortgage on a house now worth $300,000, and who drives an oversized truck (his boy toy) that he can barely afford to keep fueled on the dead-end job he has. That guy is using as much energy as a whole band of !Kung or Inuit hunters did a century ago, but he's less happy than any of them.

Energy use is not a good life. There's no correlation between energy use and human happiness once we get past bare subsistence, and there's no physical reason why Americans and other massive energy users can't scale back and still have a good life. There are massive political reasons we can't do that.

The fascinating thing, as you and others demonstrate, is that these political problems are inevitably framed as physical reasons why we can't do the right thing.
I really should point out that invoking a higher power to explain failings is also classic human behavior, but whether you invoke God, thermodynamics, or Darwinism, it doesn't make it right.

1) When you consider the source of our fertilizers, what we eat is substantially natural gas.

2) Please do not think I'm saying we don't have massive political failures. We do. I'm saying that if we magically solved our political problems, or even better if we had done that a generation ago, it remains an open question whether we could get renewable energy going at a sufficient scale to keep everyone alive.

3) Your position seems to be that the engineering problems are provably solvable and only the political barriers are apparently intractable. My position is that the political barriers are apparently intractable, but also that the tractability of the engineering problems is an open question. I honestly don't know if we could scale renewable energy sources to the point where we could use them exclusively to support seven billion people. I don't know, for example, how we would design a renewable system to keep Toronto habitable through the Canadian winter with anything like its present population.

At this point I think we understand each other, we just have different intuitions about some quantitative engineering matters that neither of us is an expert in.

My point is that there is a physical reason why that won't work, or is at least very questionable. That reason is the second law of thermodynamics and its corollary Carnot's theorem, which states that the maximum efficiency of a heat engine is limited by the temperature differential between the heat source and the heat sink. In general this means that diffuse energy sources can be harnessed only very inefficiently, which is a good reason to be skeptical of claims that renewable energy sources can be scaled up.

Hydroelectric generators, wind generators, and solar photovoltaic generators are not heat engines. The most efficient solar cells can convert 44% of incident optical energy to electricity, which is on par with supercritical coal plant efficiency and more efficient than a commercial nuclear reactor or typical automotive engine burning petrol. The entropic quality of light as a potential energy source is related to its wavelength more than its power density.
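The wavelength point can be made concrete with the photon-energy relation E = hc/λ ≈ 1239.84 eV·nm / λ. Here's a sketch using the textbook crystalline-silicon bandgap of about 1.12 eV (illustrative, not tied to any particular cell):

```python
HC_EV_NM = 1239.84    # Planck constant times speed of light, in eV·nm
SI_BANDGAP_EV = 1.12  # crystalline silicon at room temperature

def photon_energy_ev(wavelength_nm):
    """Energy of a single photon at the given wavelength, in electron-volts."""
    return HC_EV_NM / wavelength_nm

for nm in (400, 620, 1000, 1300):
    e = photon_energy_ev(nm)
    print(f"{nm} nm -> {e:.2f} eV, above Si bandgap: {e >= SI_BANDGAP_EV}")
```

Photons below the bandgap pass through unused, and the excess energy of bluer photons is lost as heat; it's this spectrum mismatch, not Carnot's theorem, that sets the familiar single-junction efficiency limits.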

Carnot's theorem results from the application of the laws of thermodynamics to heat engines. When the laws are applied to other sorts of energy harvesting devices, the results are different in their specifics but broadly similar.

I used to work with solar cells. On our best day we could see 37% efficiency for a standard (AM1) spectrum, at the peak of the power curve, for a cell about 1 cm x 1 cm. Add in resistances of all the contacts to wire together a useful area of collection, inefficiencies resulting from the need to match the load impedance to the peak of the power curve, inefficiencies stemming from the fact that different cells have somewhat different power curves, the rather severe energy and material demands involved in semiconductor processing (these were high-end GaAs cells), and all the rest, and I consider it an open question whether solar power could ever be self-sustaining without fossil fuel inputs.
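The load-matching issue can be illustrated with the textbook single-diode cell model. The parameters below are made up for a small cell and aren't meant to describe the GaAs devices above:

```python
import math

# Illustrative parameters, not measured values
I_L = 0.035     # light-generated current, amps
I_0 = 1e-12     # diode saturation current, amps
V_T = 0.02585   # thermal voltage at ~300 K, volts

def cell_current(v):
    """Single-diode model: photocurrent minus the forward diode current."""
    return I_L - I_0 * (math.exp(v / V_T) - 1.0)

# Sweep the operating voltage in 1 mV steps and find the maximum power point
v_mpp, p_mpp = max(
    ((v / 1000.0, (v / 1000.0) * cell_current(v / 1000.0)) for v in range(0, 700)),
    key=lambda vp: vp[1],
)
print(f"maximum power near {v_mpp:.2f} V, {p_mpp * 1000:.1f} mW")
```

Operate the same cell well away from that voltage and you throw away a large fraction of the available power, which is why real arrays need maximum-power-point-tracking electronics - one of the system-level losses mentioned above.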

I'm not saying that I know it couldn't work. I'm saying that I don't know if it could work.

Hydroelectric generators seem to be by far the best performers of the renewable energy set, but also the hardest to grow. Most of the good dam spots are already in use.

I think that current wind and solar PV technology is basically "good enough" on a technical basis though financial viability varies widely with region. The conversion efficiency is high enough, the EROEI is adequate-to-good, and there aren't any critical mineral bottlenecks, despite what junior mining company promoters might say. The big remaining problem is storage. Current storage technology is probably good enough for tropical locations like Hawaii, where you mostly need diurnal buffering. It is still far from good enough for extreme latitudes that don't get much winter sun and could have weeks of inadequate wind. On the bright side, there are many more people living in the latitudes between Rome and Christchurch than there are living nearer the poles. But quite a bit of the world's high-emitting population lives north of Rome, and it will be harder to curb fossil fuel use with renewables there in the absence of storage breakthroughs.
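A rough sense of why latitude changes the storage problem so much: compare bridging one tropical night with bridging a dark, calm fortnight at high latitude, using a made-up round figure of 1 kW average household load:

```python
avg_load_kw = 1.0  # assumed round-number average household draw

diurnal_kwh = avg_load_kw * 12           # bridge a single 12-hour night
winter_lull_kwh = avg_load_kw * 24 * 14  # bridge two weeks of weak sun and no wind

print(diurnal_kwh, winter_lull_kwh, winter_lull_kwh / diurnal_kwh)
```

A factor of nearly thirty in required capacity is the difference between a garage battery and storage on a scale nobody has yet built.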

I don't know whether anyone can derive the equivalent of Moore's Law for solar cells from these data, but the ones I find most interesting are the organic cells at the bottom right corner. They're inefficient, but they also appear to be increasing in efficiency faster than most of the other forms ever did. I wonder where they will top out?

The thing about the organics is that IIRC these cells may be the easiest ones to recycle. Other cells are effectively e-waste, and recycling them is a chore.

Each solar cell type has its ups and downs. Gallium arsenide is most efficient, but gallium is not so abundant and the process gases are worrisome. Silicon is abundant but energy-intensive to make and the indirect bandgap reduces efficiency (it's a solid state physics thing). Organics are potentially cheap, but degrade relatively rapidly because oxygen diffuses into plastics (this is also, incidentally, the reason beer is not usually sold in plastic bottles). All of them have substantial costs and energy losses at the systems integration level (connecting the photocells to the grid).

As for whether any of them winds up being the technology of the future, we'll just have to wait and see.

whose eternal fate seems to be to be remembered for what he got wrong and not what he got right. ;)

Now Aristotle is big on something he calls "telos", which might mean "purpose" or "end". Though in the case of animals, this might also be just something like the "form" of the animal. Which might tie in to Aristotle's theories about genetics. Err. Sorry, not really my department, and it's easy to misinterpret Aristotle. Might be one reason for his popularity.
So, well, if he talks about the "telos" of life, he might not be speaking about the purpose of life, but about life being in accordance with a certain form. Then again, for him these might be the same thing.

As already mentioned, Aristotle was wrong, but he was progress compared to some of his predecessors. Who, incidentally, might have gotten some things more right than Aristotle[1], but for the wrong reasons.

[1] Democritus is credited with the concept of atoms, and he had some primitive notions of natural selection. Though Aristotle argued against Democritus' ideas about the latter:

Democritus, however, neglecting the final cause, reduces to necessity all the operations of nature. Now they are necessary, it is true, but yet they are for a final cause and for the sake of what is best in each case. Thus nothing prevents the teeth from being formed and being shed in this way; but it is not on account of these causes but on account of the end....
—Aristotle, Generation of Animals V.8, 789a8-b15