Tag Archives: more on The Eye

Ings argues convincingly that the eye has had a profound effect on our language, perception, philosophy and even consciousness… Ings deals with these, as he does all parts of this thoroughly engaging book, with refreshing clarity, enthusiasm and vigour. It’s a real eye-opener, if you’ll pardon the pun. Doug Johnstone, The Times, March 10, 2007

In The Eye: A Natural History, the novelist and science writer Simon Ings explores evolution’s alleged masterpiece from several perspectives, including optics, physiology, history, medicine and biochemistry. It is a rich and eclectic survey, with an intriguing nugget on almost every page. Ings has done his homework and is not afraid to find fault with new ideas that don’t pass muster. Graham Farmelo, Sunday Telegraph, March 24

[Ings] rightly asserts that ‘the story of the eye is epic’, and this is an impressive attempt to summarise its 538-million-year history. There are times when the encyclopaedic scale of the endeavour rather overwhelms the reader, but it’s easy to share his genuine wonder at the sheer oddness of some of the mechanisms of sight. P D Smith, the Guardian, June 2

In this ambitious work, Ings reaches into chemistry, evolutionary biology, anthropology, psychology, aesthetics and his own fertile imagination to produce an agglomeration of ideas and themes, aimed at neither the specialist nor the idiot, but somewhere tantalisingly in between. In many ways it’s the perfectly judged popular-science book: he assumes little or no prior knowledge, but he does take for granted an open mind and a certain curiosity. His book will bring out the intelligent 12-year-old in us all. You may even look on the world with new eyes. Marcus Berkmann, The Spectator, March 31 (requires subscription; the review has been reprinted here)

Ings has succeeded in writing an elegant, entertaining and up-to-date overview of cutting-edge research. He tells the story “episodically”, in a “mix of history, science and anecdote” that is utterly compelling. Gail Vines, the Independent, April 25

Ings has a good eye for memorable anecdotes and striking facts. More importantly, The Eye is always readable, and Ings is a very good explainer of scientific concepts. Robert Hanks, the Telegraph, March 18

The evolution of the human eye sounds a potentially arid subject, but not as treated by Simon Ings, who seamlessly blends natural history with personal observation (the progress of his baby daughter), visual conundrums (illustrations punctuate the text), and a real sense of wonder. There are fascinating facts galore: our eyes are never still, for example. As well as entertaining, it’s philosophically profound: showing how our eyes, far from simply absorbing the world, are tools with which we construct our own reality. Katie Owen, the Sunday Telegraph, 27 January 2008

Voles communicate by leaving trails of urine that (happily for the hungry kestrel) reflect ultraviolet light. Simon Ings will cheerfully supply you with a whole feast of such tasty morsels in this expansive history of the eye. But while his book may satisfy a nerdy hunger for trivia, it is much more than just a compendium of information, ambitiously blending science with philosophy and drawing on history and anecdotes. The latter, which often focus on his daughter, are impressive for somehow avoiding mawkishness; in one of the book’s most moving sections, Ings considers their respective ageing and sight. It all makes for a surprisingly appealing and readable book, helped along by the odd judicious diagram. Read it and you will never see things in the same way. Hermione Buckland-Hoby, the Observer, January 27

The mirror of the soul? In Homo sapiens, maybe. But in this account of that remarkable organ, the eye, Ings goes beyond the human… The more complex his material, the clearer his prose becomes. He is equally at ease with mathematics, philosophy, palaeontology and history in this cornucopia of facts and folklore about the eye… this far-ranging and wonderfully eclectic work is popular science at its best. Ross Leckie, The Times, January 25

Charles Darwin wrote that thinking about the evolution of the eye gave him a “cold shudder”. The organ’s complexity tested his theory to the limits, yet so necessary has eyesight become to species’ survival that some scientists estimate the eye has evolved independently at least 40 times. As Ings puts it, “There are only a handful of really good ideas in nature”, and eyesight is one of them. There is a lot of science in Ings’s account, but it is leavened by engaging forays into history and biography. He relates such fascinating subjects as the ability of the Thai Moken tribe to see underwater, and Woody Allen’s rare skill in raising the inner corners of his eyebrows. Ian Critchley, The Sunday Times, January 27

‘Popular science’ now too often refers to books about penguins’ feet and the like, so it is a relief to find a work that demonstrates genuine learning and intellectual passion. Simon Ings takes us elegantly through the 600-million-year history of the eye, explaining its differing functions in humans and animals, and discussing philosophical thoughts about vision. The result is a narrative as arresting and remarkable as any fiction, accessible but complete, with the reader assumed to be an intelligent adult on the lookout for something substantial. An excellent guide to one of the world’s true wonders. the Telegraph, February 2

Iris, the Greek goddess of the rainbow, carried the first messages from the gods to man; 3,000 years later, the flow of communication is to be reversed. There are plans afoot, as we learned this week, to harness our irises, those pretty rings of multicoloured muscle in our eyes, to reveal our identities to the Olympians of Homeland Security.

We’ll each need to earn notoriety first: the FBI’s data-sharing proposals, involving an entire suite of biometric data, are directed at catching major criminals and terrorists. The name the Feds gave this project, however, suggests that someone, somewhere, is looking to the future: “server in the sky”. This is either a tip of the hat to 80s rock band Doctor and the Medics’ only hit or, more likely, a grotesque piece of security-state triumphalism.

Mind you, we are all more than likely to offer up our eyes over the next couple of years to any institution that cares to stare into them. Iris scanning is set to replace the passport and credit card as the preferred method of proving identity. Who wouldn’t want to pass through Heathrow in a blink, after all?

But there is something unpleasant about the idea of having one’s eyes scanned, and this is not altogether the fault of the film Minority Report’s stolen eyeballs scene. It is more to do with our intuition that the eyes are windows on the soul. The human eye is built to be noticed. Simply opening the eyes wider can, with other facial movements, express everything from shock to arousal to doubt. Simple gaze direction conveys emotional meaning. The lateral rectus eye muscle is labelled “amatoris” in early anatomies because lovers use it to direct their flirtatious glances.

Eyes reveal our inner state. It is impossible to control our rate of blinking for any length of time, or the way our pupils wax and wane. When aroused, we blink more often, and our irises dilate. Our eyes, with their bright whites, colourful irises, responsive pupils, brows and lashes, have evolved to communicate and carry meaning.

Nonetheless, given the amount of information they carry, eyes are surprisingly hard to read. We don’t count each other’s blinks, and we don’t press our faces up against each other to study the changes in each other’s irises. Of course, we don’t have to: we have language – which lets us lie in a way the eyes don’t. But liars are easy to spot – aren’t they?

Humans have been pack animals for most of their history. When survival depended on cooperation there was little advantage to be had from blatant lying. In a tightknit community, a pathological liar stands to lose too much if they are caught out. Now, things are different. A 65-year-old, Jean Hutchinson, was sent to jail for five years this week. Why? From her secret operations room, accessed through a wardrobe, she had managed to impersonate 76 different people well enough to defraud the British state of £2.4m.

Technology confers anonymity on people far more effectively than it establishes identity. The biometric security market emerged in the US following the passing of two laws. Neither had anything to do with security, the war on terror or other bugaboos. One was the health insurance act of 1996, which made healthcare firms protect their clients’ records more carefully; the other, known as Sarbanes-Oxley, was meant to reduce the fiddling of financial records after the collapse of Enron.

The war on terror is a branding exercise. The war on fraud is real. The technology has a long way to go before machines are invented that can scan our eyes for the secrets of our hearts. Still, this is the path we are on. As our machines learn more about us, we are increasingly learning how to hide behind our machines.

Like those jingles you can’t stop humming, some bad ideas stick. This one has maddened me for years: when you and I see a green ball, do we see the same green? When we have toothache, we don’t all have the same toothache. The notion that pain varies between individuals does not disturb us. Why, then, do we resist the idea that different people see different colours?

Just as you and I, each suffering our own very different toothaches, can agree on what a lousy experience toothache is, so we can all roughly agree on what colour is what. We can argue till the cows come home whether this particular shade of turquoise is green or blue, but we both pretty much agree on what green and blue are. There is a lawfulness to colour, and it would help if we knew where this lawfulness resided.

In his 1995 essay The Case of the Colour-blind Painter, the neurologist Oliver Sacks describes the case of an artist who, through subtle but devastating damage to his brain, could not see colour. Though damage of this sort robs an individual of the experience of colour, the mechanisms of colour vision continue to function. Asked to match up coloured counters, people with no experience of colour are still able to match up colours perfectly. They just don’t see them. But if the relationship between wavelength and colour is purely contingent, where the devil do colours come from?

Artists are forever trying to uncover universal meanings behind their colours. It is easy to scorn their efforts, not least because this kind of thinking dates very quickly. Kandinsky’s experiments in colour symbolism may as well have been conducted in the 14th century for all their relevance now. There is, none the less, a growing body of evidence that colours, shapes, sounds and smells do have meanings. Wolfgang Köhler’s delightfully simple 1929 experiment asked volunteers to match a pair of abstract figures to one of two nonsense words, “maluma” and “takete”. Immediately, and virtually without exception, people matched maluma to the soft round figure and takete to the sharply angular one. Some sort of shared symbolism related the sounds to the shapes.

Now Dr Jamie Ward, at University College London, might have uncovered an underlying symbolism to colour. Ward’s interest is synaesthesia – the experience of a handful of individuals who perceive information through an unexpected sense. Some hear colours, others smell shapes. The vast majority see sounds. The experiences of individual synaesthetes are notoriously idiosyncratic. But there are unexpected regularities, and Ward’s bulging address book – he knows 450 synaesthetes by name – allows him to spot trends that were formerly invisible. For example, among synaesthetes who see coloured letters, A is often red, B is often blue, and C is often yellow. “This is likely to hold true for other types of synaesthesia,” Ward says, “assuming that we are able to make a large enough number of observations. For instance, certain musical instruments may tend to produce particular colours, shapes and movements.”

Synaesthesia may simply be an exotic manifestation of something we all enjoy: the ability to turn sensations into symbols, and to think with them. After all, if our thoughts are not made of sensations, what are they made of? And this is why we find it so distressing, you and I, to realise that we don’t see the same colours. Colours – so striking, so beautiful, so manifestly there – are one of the few things we can agree on, more or less. How cast adrift will we feel if colours turn out to be, after all, only our thoughts about light?

Human colour vision is a relatively recent acquisition. It is, at most, 63 million years old, and it may be a lot younger. On a genetic level, it is a mess: misalignments and redundancies in the genes that code for our “red” and “green” colour perceptions account for 95 per cent of all variations in human colour vision, and it is quite usual for up to nine genes to cluster together in an attempt to code for these colours. This is why the perception of colours – especially blues and greens – varies so much between individuals. Humans perceive colour through three types of colour-sensitive cell, called cones, but some have four types. Equipped with four receptors instead of three, Mrs M – an English social worker, and the first known human “tetrachromat” – sees rare subtleties of colour. Looking at a rainbow, she can see 10 distinct colours. Most of us only see five. Her ability was identified in 1993, and a 2004 study found two tetrachromats among its 80 subjects.

WHY YOUR EYES NEVER STAY STILL

If our eyes did not move – if they simply “drank in” the view before them – we would go blind. Our retinas can only process contrast, and they become exhausted if they look at the same thing for too long. They must tremble constantly in order to bring still objects into view.

THE SIGHTS WE ALL MISS

Human vision captures only two degrees of the world with any clarity, so we tend to miss things that happen outside our focus of attention – and the more we concentrate, the more extreme our “attention blindness” becomes. This makes us easy prey for psychologists such as Daniel Simons and Christopher Chabris, whose notorious experiment of 1999 asked its viewers to score a three-a-side, 90-second basketball game. Afterwards, the viewers were told to relax, put down their score cards and watch the video again. Only then did the game’s most remarkable feature come to light: the invasion of the court, a few seconds in, by a 7ft-tall pantomime gorilla.

A VISION OF THE FUTURE

Our eyes stay several steps ahead of us, whatever we happen to be doing. When negotiating a turn in the road, for example, a driver’s eye will provide motor information to his or her arms almost a second before he or she makes any movement. By then, the eyes will already be looking elsewhere. Visually at least, we operate in the world not as it is, but as it existed half a second ago. This raises a not insignificant question: how does the eye know where to direct its gaze next?

THE CURE FOR BLINDNESS

The concept of a bionic eye is nothing new. In the 1970s, bio-engineer Paul Bach-y-Rita, now at the University of Wisconsin-Madison, was turning different parts of the body into eyes. His prototypes were vests containing hundreds of mechanical vibrators. Pixelated images from a low-resolution video camera, worn on a pair of glasses, were translated into mechanical vibrations against the skin of the chest or back. Bach-y-Rita’s volunteers were able to recognise faces using the system. Proof that they could see came when Paul threw balled-up papers at them: they ducked.

SEEING BENEATH THE SEA

Because light behaves differently in water and air, land-adapted human vision is lousy in water. Someone, however, forgot to tell the Moken – gypsies who ply the Burmese archipelago and Thailand’s western coast. Moken children, who spend days diving for clams and sea cucumbers, can see twice as much fine detail underwater as European children. While the pupils of the latter expand underwater, in response to the dimness of the light, Moken pupils shrink to their smallest possible diameter, improving acuity underwater. Mokens also use the lenses of their eyes more, squishing them to the limit of human performance.

By April, Natalie had begun, now and again, to sleep. Every so often she closed her lids and kept them shut, and when she did, her eyes trembled, stirring the oxygen-rich liquid, called aqueous humour, that lies between the iris and the cornea. (The rapid-eye movement that accompanies dreams has at least one very practical purpose: it feeds the front of the eye.(1)) This trembling woke the light-sensitive cells of Natalie’s retinas. Stimulated, her retinal cells fired at random, preserving and strengthening their connections with each other.(2) Even before she saw, Natalie went through the motions of dreaming, and those motions taught the cells in her retina to hold hands.

Prepared by dreams, Natalie’s retinal cells took their next lessons from light. Even before birth, the body is no stranger to illumination. Flesh itself lights up a little, every time a nerve fires(3). Perhaps this familiarity with light is why any young nerve cell, transported to the retina – the body’s most light-sensitive surface – will learn to see, just as every seeing cell, moved elsewhere, becomes an ordinary nerve(4).

The womb is not dark: it is easily penetrated by light from the outside world. From November to July of 2003, the month of her birth, Natalie’s retinas grew to seize what news they could from the amniotic murk of her home. They adapted to darkness and to blur. One layer of nerves grew into light-sensitive cells called rods, the better to gather the light. At birth, Natalie was well on the way to acquiring good nocturnal vision. Babies see well at night.

In the glare of day, though, they are all but blind. (It is one of the ironies of birth that it fills our world with light – and blinds us in the process.) A sunny day is a million times brighter than a night-time nursery, and a whole other form of vision is needed to handle such a glare – a form of vision Natalie had not yet got.

Exposed to the light, Natalie’s eyes did not so much have to ‘adapt’ to the brightness of day as acquire a whole new way of seeing.

In her retinas lay another set of nerve cells, distinct from the rods and only distantly related to them.(5) At birth, they were little more than ordinary nerve cells. Once exposed to the glare of day, however, they began to change. Natalie will be six before these ‘cones’ of hers are fully grown, packed as tight as they can be into the fovea – that tiny circle on the retina where images are focused and light explodes with colour.

(5) The distinction between rods and cones had already arisen in the eyes of jawless fish, swimming in Devonian seas around 400 million years ago. See Bowmaker, J. K. 1991. ‘Evolution of photoreceptors and visual pigments.’ Pp. 63–81 in J. R. Cronly-Dillon and R. L. Gregory, eds. Evolution of the eye and visual pigments. CRC Press, Boca Raton, Fla.

Since the early 1970s, Paul Bach-y-Rita has been building prosthetic eyes for the blind: not false eyes, not glass eyes, but fully working organs of vision. With them, Bach-y-Rita – a biomedical engineer at the University of Wisconsin-Madison – has helped the blind to see.

His eyes do not look like eyes. The earliest models look like clothing. Bach-y-Rita’s vests are worn either across the stomach or across the back. Sewn into the material are 256 mechanical vibrators (nicknamed ‘tactors’ because, when they’re activated, the subject can feel their touch). A computer worn at the hip receives pixellated images from an ultra-low-resolution video camera, worn on a pair of eyeglasses, and translates these images into mechanical vibrations, via the tactors. The upshot is a kind of Braille or Pin Art vision.

Bach-y-Rita’s subjects reported that after a couple of hours they were no longer aware of the tingling sensations generated by the vest. They were able to navigate between obstacles, and, eventually, to recognise faces. When the ‘view’ before them changed – because they moved, or because something moved in front of them – they reacted appropriately to the change of view. If you screwed up a piece of paper and threw it at them, they would duck.

Even more suggestive is an experiment reported by Daniel Dennett in which a researcher, without warning, manipulated a zoom button on a volunteer’s camera, making it seem as though he were hurtling forward. The volunteer raised his hands to protect his face. But his vest was strapped to his back.(1)

The artificial sense bestowed upon his blind volunteers by Paul Bach-y-Rita not only works like vision – it feels like vision. It seems that the mind is not overly fussy where it gets its sensory information from. What matters is what ‘shape’ the information takes. If visual information is received through the skin of your back, it only takes your brain a couple of hours to start seeing through your back. If your back starts itching, on the other hand, you won’t mistake the itch for a flash of light. The ‘shape’ of an itch is different to the ‘shape’ of, say, a face, and the brain knows how to deal with each.

The senses become specialized over evolutionary time, but they are never entirely compartmentalised. If we look closely at a rod – a photosensitive cell common to almost all vertebrate eyes – we see that it comes in two parts – a fairly normal-looking cell body, and a column made up of thousands of discs containing the pigment rhodopsin. When the rod is exposed to light, the pigment column expands like a slinky to twice its length, with no increase in width. In the dark, it contracts again. Each rod is behaving just like a muscle cell – and for good reason. In many functional respects, it is a muscle cell. Muscle fibres expand and contract in response to electrical stimulation. The retinal rod, too, is responding to an electrical signal – one that comes, not from a nerve, but from a biochemical reaction to light. This is what the working retina looks like on a cellular scale – a vast automated Pin Art machine.

For an overview of Paul Bach-y-Rita’s work, see Paul Bach-y-Rita, Mitchell E. Tyler and Kurt A. Kaczmarek. 2003. ‘Seeing with the brain.’ International Journal of Human-Computer Interaction, 15(2): 285–295.

In early 2001, the University of Wisconsin-Madison’s article Tongue seen as portal to the brain first broke the news of Bach-y-Rita’s return to the sense-substitution field. (Since the late 1970s, he had turned his attention more towards the rehabilitation of victims of brain damage.) The latest applications of Bach-y-Rita’s work are discussed in Blakeslee, S. 2004. ‘New Tools to Help Patients Reclaim Damaged Senses.’ New York Times, November 23.

How well do you see in the dark? Edward Halsall, a royalist major during the Cromwell era, was imprisoned for twenty months in a windowless room. It took Halsall’s eyes seven months to adjust fully to the dark, but by the end of his imprisonment, he ‘could see the mice that used to feed upon his leavings’; ‘well enough’, indeed, ‘to make a mousetrap with his cup.’ Humans have excellent night vision. (We are, after all, the descendants of nocturnal shrews.) And it’s by juggling two quite distinct forms of vision – one adapted to the dark, the other to the light – that our eyes can cope with virtually any lighting conditions.

This is as well: on a sunny day, our eyes receive a million times as much light as they can gather on a clear, moonless night. How can our eyes cope with such staggeringly different light levels?

In 1867, a young physicist called Ernst Mach pondered this optical illusion. Arrange a series of grey bands, each band slightly lighter than its neighbour, and they look as though they have been lit from the side. The edges lying against darker neighbours appear lighter, while edges lying against lighter neighbours appear darker. The fluting is an illusion, obviously – but why should the eye manufacture dark where there is no dark, and light where there is no light?

Spotting boundaries is essential to vision. Without boundaries, the edges of objects become uncertain, and objects simply bleed away into the background. So the eye manufactures shading to reveal the forms of objects. Mach worked out the mathematics of how the eye could do this. It was a brilliant piece of work, still used today. (Bang & Olufsen’s new televisions handle contrast and picture detail in an intelligent manner by applying algorithms first dreamt up by Mach, nearly a century and a half ago.)

In the 1930s, American physiologist Haldan Keffer Hartline identified the parts of the eye that performed Mach’s mathematical magic tricks; and, in doing so, he discovered something surprising. When the eye studies an evenly illuminated surface, its optic nerve falls silent. The eye can handle a million-fold difference in light level because the eye doesn’t measure the light level at all. All it ever reports are small, local variations in light intensity. Look very closely at the portrait of Che Guevara – a delightful visual puzzle dreamt up five years ago by Dr Steven Dakin of University College London. You will see, if you look closely enough, that the lit parts of Che’s face are exactly the same shade of grey as his beard and facial shadows. It’s the banded line that tells your eye which side of the line is supposed to be light, and which side is supposed to be dark – and it’s your eyes that then add shading to the picture.
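The effect Mach described can be sketched in a few lines of Python. This is a deliberately toy model of lateral inhibition – my own illustrative assumption, not Mach’s original formulation or Hartline’s measurements – in which each cell reports its own input minus a fraction of its neighbours’ average. Uniform regions then give a flat response, while every edge produces the characteristic overshoot and undershoot of Mach bands.

```python
def lateral_inhibition(intensities, k=0.5):
    """Toy lateral-inhibition model: each cell's response is its own
    input minus a fraction k of its two neighbours' average.
    (Edge cells treat the missing neighbour as their own value.)"""
    n = len(intensities)
    out = []
    for i, x in enumerate(intensities):
        left = intensities[i - 1] if i > 0 else x
        right = intensities[i + 1] if i < n - 1 else x
        out.append(x - k * (left + right) / 2)
    return out

# A staircase of grey bands, each interior uniform: the response is flat
# within each band, but overshoots on the bright side of every step and
# undershoots on the dark side -- the fluting Mach saw.
staircase = [10, 10, 10, 20, 20, 20, 30, 30, 30]
response = lateral_inhibition(staircase)
```

Running it on the staircase shows why the illusion is useful rather than a defect: the cells report nothing but local contrast, exactly as Hartline found, and the spurious light and dark fringes fall precisely where the boundaries are.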

Our perception of colour, too, is a matter of contrast. Vivid as the colours around us seem, their brilliance is manufactured in the eye. Our eyes gauge the brightness, hue and vividness of patches of colour by relating them to the shade, hue and vividness of their surroundings, and we can draw figures, like the ones here, to show how the same colour looks very different when it appears in different surroundings.

Simple figures like these seem to trick the eye into error. But in the rich visual environment of the real world – a world full of multiple light sources, shimmering reflections, dappled shadows, and complex three-dimensional patchworks – our style of vision enables us to identify the colours of things with extraordinary accuracy.

Oddly, this point was lost on vision science until midway through the last century, and the arrival of Edwin Land. Land was, after Thomas Edison, America’s most prolific inventor. Polaroid photography is just one of his inventions. Land’s startling experiments and demonstrations showed how robust our colour vision is under different lights. He prepared boards of intersecting multicoloured shapes (called ‘mondrians’, after the artist whose work they resembled) and lit them with lamps of different hues. People studying the mondrians described their colours accurately even under the most bizarrely tinted lighting.

But Land’s most famous ‘experiment’ happened by accident. Land and his team were using red, green and blue lights to produce a true-colour image on a screen (cathode-ray televisions work this way). Come evening, Land and his assistants shut off the blue projector and took the green filter out of the green projector. It was then that one of the assistants called Land’s attention to the screen. The red projector was still running, and the unfiltered green projector was projecting its image over the top of the reds in white light. And there, upon the screen, was the original full-colour image. Red and white lights were throwing blues and greens upon the screen! Land realised that the eye was using the little information it had to colour in the image, just as your eye shades in the portrait of Che Guevara.

Our eyes make things up. They snatch trickles of light from a world of blur and shadow, and they manufacture pictures of the world that are both coherent and true. The optical illusions on these pages do not ‘fool’ the eye – rather, they persuade it to reveal its creative power. They show us why, in the real world, we can believe our eyes.

‘My boy, I wish you to witness an experiment.’ He drew from its case a powerful microscope of French make.

‘What on earth are you going to do, sir?’

The doctor’s brilliant eyes flashed with a mystic light as he replied: ‘Find the fiend who did this crime—and then we will hang him on a gallows so high that all men from the rivers to the ends of the earth shall see and feel and know the might of an unconquerable race of men.’

—Thomas Dixon Jr, The Clansman (1905)

Odile Gaijean was a martyr to her son’s fecklessness. During the 1980s Odile, a resident of Eastern Alsace, had run up huge debts to bail her son out of a financially disastrous stay in Dakar, the capital of Senegal. When he turned up on her doorstep and announced that he was married, his mother was less than impressed. Then his wife turned up – a former prostitute called Fatou Sarre. Domestic bliss it wasn’t. The feud between wife and mother-in-law began with strong words. Soon, though, the women were trading blows. One day Odile, beside herself, picked up a knife. Fatou, not to be outdone, picked up a hammer. Fatou won.

When it was done, she dragged her mother-in-law out of the house and into the barn. Afraid of what might happen to her if her husband found out that it was she who had killed his mother, Fatou set about destroying the evidence of her crime. In March 1990, on trial for murder, Fatou explained to the court that she did not dare leave Odile’s dying vision intact, for it would surely show her brandishing the hammer. This is why she gouged out her mother-in-law’s eyes.

Three years later, there was a similar case, involving a hapless Cameroonian housebreaker. The press detected a pattern. Was France witnessing a reawakening of ancient ‘Cameroonian legends’ and ‘African beliefs’? Actually, no. On further interrogation, it turned out that Fatou’s ‘beliefs’ – such as they were – dated back to a Bollywood movie she had caught one night in Dakar. It was a whodunnit, based loosely, not on ‘ancient legends’ (Cameroonian or otherwise) but on a much more recent source – European detective fiction.

The short story ‘Claire Lenoir’ caused a minor sensation on its publication in Paris in 1867. Huysmans refers to it in his notorious novel A rebours, and twenty years later, there was still enough mileage in the idea that the author, Villiers de l’Isle-Adam, expanded it into a successful novel. Claire Lenoir is the wife of Césaire, an old friend of the story’s narrator, Tribulat Bonhomet. On his way to see them both, Bonhomet befriends Henry Clifton, a young English naval lieutenant who has had himself posted to the South Seas to cure himself of an ill-fated love affair with a married woman who bears more than a passing resemblance to Claire. Later – just before he meets the Lenoirs – Bonhomet stumbles over the following news report:

‘L’Académie des Sciences de Paris has stated the authenticity of certain surprising facts. It can be asserted that the animals destined to our nourishment – such as sheep, lambs, horses and cats – conserve in their eyes, after the butcher’s death stroke, the impression of the objects they have seen before they die. It is a photograph of pavements, stalls, gutters, of vague figures, among which one almost always distinguishes that of the man who has slaughtered them…’

It is an ill omen, and sure enough things rapidly take a turn for the worse. His friend Césaire falls desperately ill. Somehow Césaire has discovered his wife’s infidelity; and there on his death-bed, he swears a terrible oath of revenge. A year later, news arrives that Henry Clifton has been brutally murdered by a vampiric assailant. Césaire’s widow Claire takes to her bed, driven out of her mind by persistent nightmares. Bonhomet is with her when she dies and, noticing a blurry image lingering in her eyes, he examines them with the aid of his trusty ophthalmoscope. What he sees,

‘no language, dead or alive… could, under the sun and under the moon, express in its unimaginable horror… I saw the skies, the far-off floods, a great rock, the night and the stars! And upright, on the rock, larger in height than the living, a man… lifted with one hand, towards the abyss, a bloody head, with dripping hair… and, in the severed head, the features, frightfully obscured, of the young man I had known, Sir Henry Clifton, the lost lieutenant!’

It is strange at this remove to imagine the popular curiosity generated by the ophthalmoscope – a device, combining magnifying lenses and a light-source, with which one can view the inside of the living eyeball. The nearest contemporary equivalent would, I suppose, be the craze, a few years ago, for those ‘magic eye’ posters that conceal three-dimensional figures in a computer-generated pattern of apparently random blobs. They too, as we have seen, have their serious scientific side, and now that the fashion for them is past, they, like the ophthalmoscope, have by and large slipped out of the public gaze, back into the somewhat rarefied world of vision science.

The way de l’Isle-Adam collides the two great scientific crazes of his day – ophthalmology and photography – is a splendid example of the fantastic at work. The eye is the soul’s camera, and the ophthalmoscope reaches in to spy upon the soul!

If the idea seems trite today, it has only become so through repetition. For de l’Isle-Adam’s readers, ‘Claire Lenoir’ spoke to the bodily anxieties generated by the era’s huge advances in surgery and imaging, as surely and as disconcertingly as the films of David Cronenberg speak to the bodily anxieties of our own, increasingly anonymous and ‘virtualised’ day. Exposed in this way, the body reveals its affinity with mechanism: the eyes are cameras, and to the well-equipped observer they reveal the secrets of the heart as surely as a daguerreotype reveals the interior of the sitter’s drawing room. This is a great opportunity – but a risky one, at least in the uncanny world of ‘Claire Lenoir’. At some level, humans are not machines; they are messy, unpredictable generators of nightmares. You can look, by all means – but you may not like what you find:

‘And Science, the old queen-sovereign with clear eyes, with perhaps too disinterested a logic, with her infamous embrace, sneered in my ear that she was not, she also, more than a lure of the Unknown that spies on us and waits for us—inexorable, implacable!’

Such was the public’s fascination with the new-fangled excitements of forensic science that the fictions that followed ‘Claire Lenoir’ became steadily less like ghost stories, and more like police procedurals. In 1897 Jules Claretie wrote ‘L’Accusateur’, in which the eye of a murdered diplomat, dissected by the police, reveals the blurred image of the face the victim saw as he died. Jules Verne’s Les Frères Kip (1902) sits even more comfortably within the detective genre in its tale of innocent brothers, arrested for a murder they did not commit but released, at the eleventh hour, on the evidence of a retinal photograph. Three years later, retinal photography reveals the identity of a rapist in Thomas Dixon Jr’s jaw-droppingly racist novel The Clansman – the basis for D W Griffith’s film The Birth of a Nation.

Philosophers and scientists could no more resist the analogy between vision and photography than could the reading public. In 1859, Charles Darwin’s Origin of Species had explained how, given large enough numbers, time, and a testing environment, living things acquire an efficient, apparently designed form. In acquiring vision, had nature anticipated the very solutions hit upon by those pioneers of photography, Wedgwood, Niépce and Daguerre?

In order for the retina to behave like a camera’s plate, it had to contain some light-sensitive chemical, analogous to the silver salts coating the plates on which the earliest photographs – daguerreotypes – were captured. The presence of a reddish pigment, at least in the retinas of frogs and squid, was established in 1851 by the German anatomist Heinrich Müller (1820–1864). In 1876, a brilliant young compatriot of his, Franz Christian Boll (1849–1879), narrowed the pigment’s source down to one particular group of cells – the ‘rods’. The unusual shape of these cells, coupled with the way they were packed so closely together across the retina, was a clear indication that they possessed some specialised optical function. Were these the cells that saw?

Boll had noticed that when a frog died, the reddish colour of its retinas bleached away, so that after about a minute the retina appeared colourless. At first he thought this was a consequence of the frog’s death. But then he discovered that even dead frogs retained healthy reddish retinas for up to twenty-four hours, provided they were kept in the dark. If light bleached a pigment stored in the ‘rod’ cells of the frog retina, might that chemical event not convey the impression of light to the frog’s brain? In 1879 Boll’s experiments were cut short: he was only thirty when he died, a victim of the tuberculosis that had blighted his otherwise glittering professional life. But he had done enough by 1877 to convince his peers at the Berlin Academy that ‘This change in the outer segments of the rods forms indisputably a part of the process of vision.’ Among Boll’s admirers was Wilhelm ‘Willy’ Kühne (1837–1900). A powerful figure in the field of physiology, Kühne had recently succeeded Hermann von Helmholtz as Professor of Physiology at Heidelberg. Kühne took up Boll’s discoveries with ‘fiery zeal’.

Boll had called his pigment ‘visual red’; Kühne reckoned the stuff was more purple than red, and changed the name accordingly. Otherwise, Kühne agreed with Boll – and acknowledged his debt to him – on almost every important point. If the retina was the eye’s ‘photographic plate’, then Boll was surely right, and ‘visual purple’ was the eye’s equivalent of the camera’s silver salts. Kühne hoped to complete Boll’s work by extracting evidence of this process from the retina itself. Working with the simplest equipment in a darkroom, Kühne found a way to maintain retinas long enough that he could study them closely. The dissected retinas remained purple in the dark. Light bleached them, but not all at once. Kühne observed distinct stages in the bleaching process, from purple to orange, to yellow, to buff, until at last the retina became entirely transparent.

Fifty years passed before any new knowledge was added to the meticulous description of visual purple assembled by Kühne over two busy years. Indeed, the pile of unanswerable questions Kühne had amassed concerning the stuff might have altogether obscured his actual achievement, had he not gone to extraordinary and repeated efforts to publicise his central finding: that the eye behaves like a camera.

On November 16, 1880, in the nearby town of Bruchsal, a young man was beheaded by guillotine. The body was taken to Heidelberg, where Kühne, in his capacity as Professor of Physiology, was waiting. In a gloomy room, its few windows screened with red and yellow glass, Kühne dissected the dead boy’s eyes. Ten minutes later, he was able to show colleagues a sharp pattern on the surface of the left retina. This, Kühne told the company, was an optogram: a dying vision, preserved as a pattern in the delicate and unstable visual pigment called visual purple. The trouble was, Kühne’s sketch failed to match any object that may have been visible to the felon as he died.

We know now that Kühne had been confronted – and bested – by a curious specialism of the human eye. Its area of focused vision – the fovea – is tiny in comparison to the size of the retina as a whole. Even more puzzling, the fovea is largely rod-free. Kühne had better success recording the optograms of frogs and rabbits. The following account of Kühne’s early and modestly successful ‘optography’ is by the biochemist George Wald – himself a Nobel prize-winner for his studies of visual pigments. (Here, Wald refers to visual purple by its current preferred name: rhodopsin.)

‘One of Kühne’s early optograms was made as follows. An albino rabbit was fastened with its head facing a barred window. From this position the rabbit could see only a gray and clouded sky. The animal’s head was covered for several minutes with a cloth to adapt its eyes to the dark, that is to let rhodopsin accumulate in its rods. Then the animal was exposed for three minutes to the light. It was immediately decapitated, the eye removed and cut open along the equator, and the rear half of the eyeball containing the retina laid in a solution of alum for fixation. The next day Kühne saw, printed upon the retina in bleached and unaltered rhodopsin, a picture of the window with the clear pattern of its bars.’

The key word in this passage, of course, is fixation. A living retina has no interest in fixing images. Far from capturing perfect stills, it offers up a constantly changing view. As Kühne himself wrote, ‘the retina behaves not merely like a photographic plate, but like an entire photographic workshop, in which the workman continually renews the plate by laying on new light-sensitive material, while simultaneously erasing the old image.’

It is impossible now to say whether Kühne’s macabre autopsy on the felon from Bruchsal was a rational and necessary extension of his laboratory work, or an aberrant piece of showmanship. Certainly by then his work and the work of his predecessor Franz Boll had been widely popularised in magazines across the globe. In Britain, Fortnightly, Nature, Athenaeum and Nineteenth Century ran articles, as did Harper’s Weekly in America. Nor should we underestimate the pressure Kühne and other researchers were under to extend the frontiers of forensic science. Since 1857, when the first, probably apocryphal story of optography appeared in Notes and Queries, curiosity about the possibilities of forensic photography had regularly found its way into court transcripts. By 1869 the Society of Forensic Medicine in France was so concerned that the courts might start admitting ‘optographs’ as evidence in murder trials that it asked Dr Maxime Vernois to conduct a scientific feasibility study. Vernois set about his task with enthusiasm, ‘violently’ slaughtering seventeen experimental animals ‘under bright lights’. They died in vain: ‘It is impossible to find upon the retina of a victim the portrait of its murderer or the representation of whatever object or physical trait that presented itself to its eyes at the time of death.’

Despite the failures recorded by Vernois and by Kühne, enthusiasm for the idea persisted in spicing up genuine murder investigations. In 1888 Walter Dew – a London policeman famous, years later, for catching the murderer Dr Hawley Harvey Crippen – witnessed ‘the most gruesome memory of the whole of my Police career’ when he entered 13 Millers Court in Whitechapel, where lay the disordered remains of Mary Kelly, a victim of Jack the Ripper. ‘Several photographs of the eyes were taken by expert photographers with the latest type cameras,’ Dew remembered in his autobiography, in the ‘forlorn hope’ that an image of her killer was retained on the retina. Is Dew’s memory reliable? Perhaps not: but the idea was certainly on everyone’s mind. At the inquest of Annie Chapman, another of Jack’s victims, the jury foreman asked police surgeon Dr George Bagster Phillips, ‘Was any photograph of the eyes of the deceased taken, in case they should retain any impression of the murderer?’ ‘I have no particular opinion upon that point myself,’ Phillips replied. ‘I was asked about it very early in the enquiry and gave my opinion that the operation would be useless, especially in this case.’

There is, as far as I have been able to ascertain, only one, extremely dubious example of a conviction obtained by means of an optogram. It is from the Sunday Express, and is quoted in Véronique Campion-Vincent’s invaluable paper ‘The Tell-Tale Eye’, published in the journal Folklore in 1999. It seems that readers of the edition published on July 4, 1925, thrilled to the tale of Fritz Angerstein, a resident of Limberg in Germany, who killed eight members of his household with a hatchet. The police, seeking a quick confession, told the court that they had taken a photograph of the retinas of Angerstein’s gardener – a picture which revealed ‘a picture of Angerstein with his raised arm gripping a hatchet.’ Hearing this, Angerstein threw in the towel straight away and confessed to the killings.

And the police photograph? Most likely, it was simply a tale, spun to demoralise a gullible defendant. In any event, it was never made public.