Posts Tagged ‘Dawkins’

A while back I wrote a post about vision and why it is that some things simply cannot, even in principle, be described in visual terms. I focused (see how hard it is to avoid metaphors of sight?) on things smaller than atoms, but I didn’t need to go that far. Right now, you are reading these words through at least several inches of air – real-world, macroscale stuff that you are able to feel or hear when it moves, but are unable to see.

Transparency is something magical. As a child I was fascinated by glass: solid, hard, heavier than water – and yet invisible. I asked how this could be possible, and was never really satisfied with any answer I got. And it turns out this is because I was asking the wrong question. It turns out that glass’s seemingly magical transparency is not the phenomenon demanding an explanation. To gain the deep understanding I missed as a child, we must consider the origin of opacity.

Ranked in order of decreasing wavelength, the electromagnetic spectrum begins with radio waves and continues with microwaves, the infrared, the ultraviolet, X-rays, and gamma rays. Note the omission: I have deliberately excluded visible light. Why?

The portion of the electromagnetic spectrum that we can actually see is vanishingly small. You could blink and miss it, though of course if you blink you do miss it. Visible light – colour – is an astoundingly narrow selection of the available wavelengths between infrared and ultraviolet. One might wonder why this particular chunk of real estate, between 390 and 750 nm, happens to be the one that we can see. And if you ask it in these terms, you are still asking the wrong question.

Recall that your “seeing” something corresponds to your brain detecting a chemical change in a substance called 11-cis-retinal in your eyeball. 11-cis-retinal only absorbs radiation with wavelengths between 390 nm and 750 nm; anything outside this range has no effect, and so is invisible. So this is why only some of the light gets “seen”. But this only pushes the question back one step. Why do our eyes employ 11-cis-retinal, and not some other chemical with absorbance in another wavelength range?

We can narrow the possibilities using an understanding of chemistry. There are no known chemical compounds that undergo a chemical change on exposure to radio waves. This means that no organism dependent on chemistry as we know it could ever treat radio waves as its own personal “visible light”. The same appears to go for microwaves, though this is contested. X-rays and gamma rays do cause chemical changes in molecules, but at wavelengths such as these it would be quite a challenge to evolve an eye that could handle them (an essay by Arthur C Clarke suggests an animal with a metal box for an eye and a microscopic pinhole to focus it, but only to illustrate the difficulties involved). So from the restrictions of photochemistry we’re limited to a window about 3500 nm wide available for seeing – and yet evolution has caused us to see only a fraction of that. Why? And why did it “choose” for us the wavelength range that it did?

Well, consider some possibilities. What if we saw in the range of about 100 to 200 nm? Chemically it’s possible. But no organism on Earth would evolve to see in that wavelength range. Our atmosphere is 80% nitrogen, and nitrogen absorbs light at about 100 nm. If we saw in that range, air would not be transparent: it would be totally opaque. The ability to see in this wavelength range would be worthless, just as it would be worthless to see around 1450 nm, where water absorbs; we evolved from creatures that needed to see in water. Here is the answer to the problem of transparency, and the problem is revealed to be that the question was backwards. Air (or water, or glass) is not transparent in itself; it is transparent to us because eyes that don’t find air transparent would be of no use to us. The transparency of air is the result of the environment our genes have designed us to live in. Of course, a subterranean creature like a mole might welcome a design of eye that makes soil transparent – while simultaneously leaving worms opaque and visible. But the chemistry for that does not exist, and moles have to make do with being blind.
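The argument above can be sketched in a few lines of code. This is a toy, not real spectroscopy: the two absorption wavelengths are the ones quoted in the text, and the 50 nm "margin" is an invented placeholder.

```python
# A toy sketch of the absorption argument - not real spectroscopy.
# The two absorption wavelengths are the ones quoted in the text;
# the 50 nm "margin" is an invented placeholder.

# (absorber, rough wavelength of absorption in nm)
ABSORBERS = [
    ("nitrogen (80% of air)", 100),
    ("water", 1450),
]

def blinding_absorbers(low_nm, high_nm, margin_nm=50):
    """Return the absorbers whose absorption falls inside (or near)
    a hypothetical visual range, making that range useless for an eye."""
    return [name for name, centre in ABSORBERS
            if low_nm - margin_nm <= centre <= high_nm + margin_nm]

# Our actual visible range, 390-750 nm, dodges both absorbers...
print(blinding_absorbers(390, 750))   # []
# ...while the hypothetical 100-200 nm range is blocked by the air itself.
print(blinding_absorbers(100, 200))   # ['nitrogen (80% of air)']
```

An eye tuned to a range that overlaps one of these absorbers would be staring into fog; the only useful windows are the gaps between them.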

Practical considerations aside, it’s interesting to ask whether X-ray vision might have been useful in evolutionary terms. If we saw in the X-ray region, most matter would be transparent to us, including our own bodies. This would be useful for some things, like spotting tumours or broken bones. But we would struggle to pick fruit, or detect approaching thunderclouds, or build tools out of wood. As a species, we are better off with the kind of eyes that can detect the chemical difference between an unripe fruit (green) and a ripe one (red). Evolution has selected for us a sense of vision that operates in the part of the spectrum that is richest in information relevant to our survival. Other animals make use of slightly different wavelength ranges: bees, for instance, favour the shorter ultraviolet wavelengths, which are rich in information about the availability of nectar in flowers.

In fact, it’s arresting to imagine an alien world, lit by a sun that emits different wavelengths of light from our own – populated by aliens based on a very different chemistry, with strange eyes for detecting wavelengths we cannot ever hope to see. If ever they came to visit us, their children might well look at us in fascination, wondering why it is that we humans are as transparent to them as glass…

Arrogance is a common accusation levelled at scientists when they declare that, actually, x is the case, and anyone who believes otherwise is simply incorrect. And, yes, there is an arrogance associated with any kind of absolute certainty, since there is very little, if anything, of which anyone can be truly certain. This is something that scientists know (or should know) better than anyone. Scientists are generally very candid about what they do not know, where their areas of expertise lie and what falls outside them, and their claims are always tempered by error bars, confidence levels, and the fact that correlation does not imply causation; and that’s before you get down to the real philosophy of science stuff: the problem of induction, the unreliability of the evidence of the senses, and so on.

Nevertheless, there are some things about which scientists’ feelings come so close to certainty that there isn’t much reason to call it anything else – certainty in the existence of atoms, or that the Earth is an oblate spheroid orbiting a main-sequence star, or that humans and broccoli share a common ancestor. The evidence for these things is overwhelmingly good, and anyone who believes otherwise is wrong.

But is it arrogant to say so?

The concept of arrogance is bound inextricably to the idea of respect: to be arrogant is to not respect another person’s opinions.

Now I’m just going to come right out and say it: some people’s opinions are pretty dumb. The idea that the Earth is 6000 years old deserves no respect whatsoever. But then, neither does the idea that the Earth is 4.5 billion years old. No opinion deserves respect, or protection from criticism.

But is disrespecting an opinion the same as disrespecting the person who holds it? Sometimes, if it’s not done properly. And here lies the meaning of arrogance.

Respect, as applied to an intellectual, means that, if this person says something that is totally opposed to your own opinions, you still listen to what she has to say. It’s tempting to dismiss people who say that trial by jury should be abandoned; but when Richard Dawkins says it, I sit up and pay attention, because I know he’s thought hard about it. I respect the man, and so I listen.

To respect someone means to assume that his opinion is founded on careful thought that is worth taking on board; it also means to assume that he is amenable to rational argument, and is not so inflexible that he cannot be persuaded otherwise, if he is wrong. One should always make this assumption, and frame one’s arguments as though to someone who will listen to them; if nothing else, it is good exercise. To treat one’s opponent as unreachable by logical discourse is arrogant in the extreme.

So next time you see a conversation in which one debater calls the other arrogant, ask yourself this question: who is showing less respect? The one who hears a deeply held belief and demands evidence for it? Or the one who deploys the A-word as a get-out-of-argument-free card, hoping to stop the debate in its tracks?

Here at the S I we like nothing more than a nice glass of milk after a long day of studying. But drinking milk is, in evolutionary terms, a very strange thing to do.

Fact of the day: most of the peoples on Earth are unable to digest milk. Surprising, isn’t it? Note the ‘s’ in ‘peoples’; that is where the clue is.

All mammals* are able to produce milk, but humans are alone on Earth in drinking it as adults. The problem is the carbohydrate lactose, a disaccharide sugar formed by a reaction between glucose and galactose in the mammary gland. Lactose is unique to milk, being found nowhere else in the body. As such, it requires special chemical equipment to break it down and digest it.

Lactose is broken down in the small intestine by an enzyme called lactase, which, in most mammals, is produced only when the mammal is very young; production tails off shortly after birth. After lactase production shuts down, it is impossible to digest lactose in the small intestine; one becomes ‘lactose intolerant’. In this event, lactose digestion occurs downstream, as it were, in the large intestine, where decomposition by intestinal bacteria fills the gut with unwanted gas and water. Naturally, people with lactose intolerance tend to avoid milk.

But I can drink milk. If you are reading this, there is a good chance that you can too. What makes this possible? The answer is surprising.

The ability to drink milk is an evolutionary phenomenon, and it arose extremely recently. Only ten thousand years ago, before the agricultural revolution, people with a tolerance to lactose were in the minority. This made sense in terms of evolutionary economy: since your mother will stop producing milk, why continue to produce a costly enzyme to digest it?

Nevertheless, there was variation. Some people continued to produce lactase a little longer than the others. This variation was meaningless noise until the domestication of the cow, when suddenly it became a real advantage. For the first time, people other than children were able to access the energetic and nutritional powerhouse that is cow’s milk.

In times of scarcity this additional food source was a matter of life or death. Children who were able to drink cow’s milk later in life were more likely to survive than those who stopped producing lactase early. That’s one hell of a selection pressure. Effectively, in those parts of the world where cows were domesticated, the lactose-tolerant outbred the intolerant. In short, we evolved.
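Just how fast such a selection pressure can work is easy to see with a minimal haploid selection model. The 1% starting frequency and the 5% survival advantage are invented for illustration, as is the 25-year generation time; only the shape of the argument comes from the text.

```python
# A minimal haploid selection model of the pressure described above.
# The 1% starting frequency and 5% survival advantage are invented
# for illustration; only the shape of the argument is from the text.

def next_generation(p, s):
    """Frequency of a variant with relative fitness 1 + s (vs 1)
    after one generation of selection."""
    return p * (1 + s) / (p * (1 + s) + (1 - p))

p = 0.01           # lactase persistence starts as a rare variant
s = 0.05           # modest survival advantage where cow's milk exists
generations = 0
while p < 0.5:     # run until the variant is the majority
    p = next_generation(p, s)
    generations += 1

print(generations)  # 95 - comfortably within the ~400 generations
                    # (10,000 years at ~25 years each) since domestication
```

Even a modest advantage, compounded every generation, takes a rare variant to a majority well inside the time available.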

‘Domestication’ is the process by which a wild animal becomes accustomed to an agricultural environment. We think of cows as being domesticated by humans, but what the story of milk tells us is that it works both ways. On a world map, the presence of milk-producing cows is tracked by a detectable change in human genetic makeup.

We domesticated the cows to produce milk more or less consciously. But without realising it, we simultaneously domesticated ourselves to consume it.

Selfish gene theory is a gene-centred view of natural selection. Your genome is made up of thousands of individual genes, each one of which has only one goal: replication. And since the resources available for replication are finite, the genes compete.

One way in which a gene might ensure it gets reproduced is to gang up with some other genes to make a body. This body will act as a temporary, disposable vehicle, to be thrown away once it has had children, but worth designing carefully. The genes try to make the body survive at least to childrearing age, by equipping it with sharp teeth or keen eyes. Genes have to learn to work together: a gene for light bones might do well to pair up with a gene for wings; genes for gills and flippers go hand in hand. All of the wonderful good design of life comes from genes cooperating to compete – building bodies that enhance their chances of replication.

But even within a body, the competition between genes is still going on.

Imagine your genome as a sequence of letters, T, C, A and G – 2.9 billion letters arranged in a line, maybe on a tape of paper. Every generation, this piece of paper gets transcribed and copied. Sometimes, mistakes are made.
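The copying-with-mistakes process can be sketched as a toy simulation. The genome length, the error rate, and the random seed are all invented for illustration.

```python
import random

# A toy version of the copying process: a "genome" is copied letter by
# letter, and each letter has a small chance of being miscopied. The
# length, error rate, and seed are invented for illustration.

LETTERS = "TCAG"

def copy_with_errors(genome, error_rate, rng):
    """Copy a genome; each letter mutates to a different random letter
    with probability error_rate."""
    out = []
    for letter in genome:
        if rng.random() < error_rate:
            letter = rng.choice([x for x in LETTERS if x != letter])
        out.append(letter)
    return "".join(out)

rng = random.Random(42)   # fixed seed so the run is repeatable
parent = "".join(rng.choice(LETTERS) for _ in range(100_000))
child = copy_with_errors(parent, error_rate=0.001, rng=rng)

mistakes = sum(a != b for a, b in zip(parent, child))
print(mistakes)   # roughly 100: about 1 letter in 1,000 miscopied
```

Most such mistakes do nothing; very occasionally one does something interesting.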

About 65 million years ago, early in the evolution of primates, an error in gene replication created a monster called Alu.

Alu is a transposon: a short piece of DNA – about 300 letters long – that is able to reproduce itself within the genome. It appeared when a gene necessary for protein synthesis was mistranscribed. It only had to appear once. Since then, Alu has been quietly copying itself in the genomes of primates – humans included.

Its success has varied over history. Right now it is believed to create one extra copy of itself in the genome every 200 generations or so; at times it has been more successful. In those periods, almost every child had one more Alu unit than its parent.

Like other genes, it is copied from one generation to the next; the effects are cumulative. Over the immensely long course of its existence, Alu has done well. Imagine huge segments of your DNA existing as the same sequence of letters repeating over and over again – not just hundreds of times, but millions of times.

65 million years after it first appeared as a mutation in a single primate, Alu now occupies 10% of your genome. 10% of your DNA is made up of these 300 letters, meaningless junk repeated over and over again, simply because in the great competition of evolution, Alu found a way to cheat.
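The post's own numbers can be cross-checked with a little arithmetic. The 25-year generation time is my own assumption; the other figures are from the text.

```python
# Cross-checking the post's numbers. The 25-year generation time is my
# own assumption; the other figures are from the text.

genome_letters = 2_900_000_000          # letters in the genome
alu_fraction = 0.10                     # share of the genome that is Alu
alu_length = 300                        # letters per Alu copy

copies = genome_letters * alu_fraction / alu_length
print(round(copies))                    # ~966,667: about a million copies

# At today's rate (one new copy per ~200 generations), 65 million years
# would produce far fewer copies than that...
generations = 65_000_000 / 25
copies_at_current_rate = generations / 200
print(round(copies_at_current_rate))    # 13,000
# ...so Alu must at times have copied itself much faster than it does now.
```

The gap between thirteen thousand and a million copies is exactly why the text says Alu's success has varied over history.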

But even Alu can mutate, adapt, evolve, and now there are subtle variations of Alu, superfamilies that compete with each other, trying to out-copy one another…

And the game goes on.

REFERENCES

Richard Dawkins: The Selfish Gene and The Ancestor’s Tale

The numbers vary from one paper to another; I took the above from “Alu Repeats and Human Genomic Diversity”, Batzer and Deininger; see also Molecular Biology of the Cell, second edition.

Information is news. But what is information? A note, written in pencil on a page, contains information; but information is not made out of graphite deposited on cellulose. A radio broadcast is information; but information is not made out of radio waves, and it is not the vibration in the air molecules between the radio and your ear.

It is formless, shapeless, intangible. Nevertheless, it can be measured, quantified, treated mathematically.

Information reduces uncertainty. Toss two coins in the air – a penny and a pound coin, say – and catch them in your hand without looking at them. You are uncertain about whether each has come up heads or tails. This uncertainty is measurable: if you had to guess the outcome of the toss, you’d have a 25% chance of getting it right.

But you peek at your hand. You see that the pound coin has come up tails, but you can’t see the penny. With this new knowledge of the state of one of your coins, you have a 50% chance of guessing right. The information has doubled your chances; the uncertainty has dropped by half.

Information is measured in bits. One bit, short for binary digit, lowers your uncertainty about the world by one half. It reduces the number of yes/no questions you have to ask in life by one.

Note that the information content of a fact, measured in bits, depends on how uncertain you were to begin with. If I tossed a double-headed coin and told you it came out heads, that statement contains no information. It was always going to happen; no uncertainty is reduced. Conversely, if I tell you I rolled a die and it came out six, that fact is worth 2.6 bits – more than one bit, because one yes/no question would not have been enough to remove the uncertainty. Identifying a card pulled from a pack of 52 cards is worth 5.7 bits.
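All of these figures come from a single formula: learning that an event of probability p has happened is worth log2(1/p) bits. A sketch:

```python
from math import log2

# The rule behind all of the figures above: learning that an event of
# probability p has happened is worth log2(1/p) bits.

def bits(p):
    """Information content, in bits, of an event with probability p."""
    return log2(1 / p)

print(round(bits(1 / 2), 1))    # one fair coin:        1.0 bit
print(round(bits(1 / 4), 1))    # two fair coins:       2.0 bits
print(round(bits(1 / 6), 1))    # a six on a die:       2.6 bits
print(round(bits(1 / 52), 1))   # one card from 52:     5.7 bits
print(round(bits(1), 1))        # double-headed coin:   0.0 bits
```

The less likely the outcome, the more bits you gain by learning it; a certainty is worth exactly nothing.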

How much information is contained in the statement “It will rain in the UK tomorrow”? Not much – the answer isn’t surprising. You could have guessed it anyway.

After a week trapped in a mineshaft, a woman is rescued. The news report quotes her as saying, “I am glad to be out.” Not much information. You would be surprised to hear her say, “Actually, I preferred it down there.” That would be information.

But if that was what she’d said, would the journalist report it? Information-rich or not, does it make a good story? Or would the journalist have quietly turned off the camera, choosing not to show those bits?

The tanks of liberation roll into a bombed-out city. A flak-jacketed war correspondent films crowds of people welcoming the brave soldiers who have freed them, cheering them on. Surely very surprising, very information-rich. But how photogenic would the opposite case have been? A crowd of angry shopkeepers whose lives have been wrecked – would that footage be aired? Perhaps not. Regardless of what happened, it is likely that any images shown will be positive. Positive, and unsurprising. Information content low.

Information is news – but not all news is information.

REFERENCES

Information theory stuff from Richard Dawkins’s essay “The Information Challenge” as found in The Devil’s Chaplain; definitely worth a read.