by Francisco Rebolledo

Published by the Fondo de Cultura Económica under the auspices of the
Secretariat of Public Education and the National Council of Science and
Technology.

INTRODUCTION
IN SEPTEMBER of 2003 I was invited to participate in an editorial enterprise
that the newspaper Milenio was about to launch. It consisted of a cultural
supplement that would appear every Saturday and would carry the title of
Labyrinth. I accepted the proposition, not without a certain trepidation, for
committing myself to writing an article every two weeks was something new to
me and I was not very sure I could fulfill it, given that I have an enormous
respect for the written word and it costs me a great deal of work to draft
anything; besides, I was not very clear as to what my contributions would
treat of.
This last I resolved with relative ease. The three themes that have
impassioned me throughout my life are literature, history and science. I
decided, then, that my submissions would treat of one of them every two
weeks; two weeks that later became one, for, despite my original
trepidations, three months after beginning my collaboration in the supplement
I promised to deliver my articles every seven days. I gave the column the
name "From the Ravine" because I live in Jiutepec, very near the Amanalco
ravine, which Malcolm Lowry immortalized in his ineffable novel Under the
Volcano.
Like an industrious ant, after more than three years I have accumulated
close to 200 articles on the most varied themes, of which about half are
dedicated to science and are those that I present in this book.
As you can imagine, some articles I wrote on the impulse of a recent
occurrence; others, in observance of some anniversary; and others more simply
because I felt like covering a certain topic.
I present them here not in the order in which they were written, but
grouped under six great themes: Physics, Astronomy, Chemistry, Biology and
evolution, Ecology, and the Development of science. Sometimes I developed an
essay on the same matter across various submissions. Here I present it with
its parts integrated, except in some cases when I considered it more prudent
to bring them to light just as they were published in the column.
It is not for me to judge whether the material that comprises this book is
worth reading, or whether its content even assists a little in the
comprehension of the fascinating world of science and, above all, in the
motivation to study it, which were the purposes that guided my pen.
It is you, kind reader, who are called to judge that.
PHYSICS

1. Ah, time!
UNTIL the final stages of the 19th century physicists defined time as an
independent variable. That is, something in which things occurred without
anything happening to itself. That unhappy first-born of classical physics
roamed terrifyingly alone, like an aged river, from an uncertain origin towards
an impossible end, carrying its independence on its back, perhaps lamenting its
vacuity and surely distressed before the somber perspective of its infinitude.
Because, in effect, classical time was infinite. No recourse remained for
it since, had it not been so, if it had had a beginning and an end, however
prolonged the duration one might measure between them, time would not be
independent, would change (would have a before and an after, there being a
young time and an old time), and how could it change without the existence of
something independent that would be there precisely for this, so that things
change? The responsibility of not changing so that things may change, the need
to be infinite so that everything might have an ending, the somber requirement
to be witness of all that occurs and at the same time not to participate in
anything, to--finally--find oneself obliged not to exist so that everything
else might exist: all this can only rest upon the wide shoulders of God. I believe
that this is why the Mayan wise men venerated time as the only true god.
Nietzsche, one of the philosophers who best understood the science of his
age, discovered an impeccable syllogism in the infinitude of time: if time is
infinite, and things occur in time, then all things will re-occur infinitely.
That which was, is and will exist until the end of time; but since time has no
end, that which was, exists and will continue to do so until infinity; that
which was, then, will eternally return.
Thomas Mann, like many other writers, was obsessed with time; so much so
that one could say time is the protagonist of his colossal Magic Mountain.
Nevertheless, the time that struck Mann with its most pointed questions is
not the old and immutable Chronos of the classical physicists; at best it is
an illegitimate son of his, or perhaps not even that: it may be simply a
parvenu who borrowed the name from him, though with a very different
surname. The time that intrigued Mann is what makes his novel like an
infernal carnival machine, with eternal intervals, sleeplike, as are his
very characters, who eat and rest, rest and eat, in a play of mirrors and an
impossibly slow rhythm, and with other intervals so vertiginous, so brief
and intense that they are better called instants, in which, after who knows
how many pages speaking only of snow and more snow, not even a couple of
hours have passed. The time that interested Mann, as also occurred with
Bergson, is psychological time. This last, as
opposed to its elder parent, which rules the physical world, is content to
validate the changes that occur in the human mind, and like the creatures it
governs, is capricious, unpredictable and fickle. That strange and cruel
quality of psychological time has already been fully discussed: it flows most
vertiginously the more we desire it to slow, and most phlegmatically when we
most urge it to advance.
I do not wish to end this submission leaving our intimate classical time
so sadly desolate. In reality, at the beginning of the last century the
genius of Albert Einstein brought very welcome news concerning the old
Chronos: he discovered that it was neither independent nor so isolated
and--best of all--that it was not infinite (too bad for the eternal return).
15 September 2003

2. Elusive reality
AMONG many other things, the 20th century brought a revolution to physics. And
the essence of classical time was among the most affected by this revolution;
one of the principal consequences of the special theory of relativity, which
Albert Einstein proposed in 1905, is that time was not, as was thought up to
then, an independent variable. And if it was not, it would in turn have to
depend on something, and that something is, neither more nor less, the speed
of light in a vacuum. That constant, difficult to imagine due to its enormous
magnitude (in one second light can go around the earth more than seven times),
subordinated time; Einstein's theory tells us that time transpires more
slowly the more rapidly an object moves. Thus, if an object should attain
(something that is impossible in practice, yet not impossible to imagine) that
limiting velocity, time simply would cease to pass in it. The general theory
of relativity, which Einstein himself proposed in 1915-1916, made the nature
of time even stranger: not only did it cease to be an independent variable,
it became one more coordinate for locating the movement of a body in space.
For, according to this theory, space was no longer that three-dimensional box
of infinite extent that Newton conceived, but a space-time continuum of four
dimensions, whose properties are subject, like everything else, to the speed
of light, the only absolute in that universe of relativity discovered by
Einstein.
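The slowing of moving clocks described above can be put in numbers; a minimal sketch using the standard Lorentz factor of special relativity (the function name and the sample speeds are illustrative, not from the text):

```python
# Special-relativistic time dilation: a moving clock runs slow by the
# Lorentz factor gamma = 1 / sqrt(1 - v^2 / c^2).
import math

C = 299_792_458.0  # speed of light in a vacuum, m/s

def dilated_time(proper_time_s: float, v: float) -> float:
    """Time elapsed for a stationary observer while a clock moving
    at speed v records proper_time_s seconds."""
    return proper_time_s / math.sqrt(1.0 - (v / C) ** 2)

# At everyday speeds the effect is invisible: for v = 300 m/s the
# result differs from 1 s only around the thirteenth decimal place.
print(dilated_time(1.0, 300.0))
# At 99.5% of c, one second aboard lasts about ten seconds outside.
print(dilated_time(1.0, 0.995 * C))
```

As v approaches C the denominator goes to zero, which is the formula's way of saying what the essay says: at the limiting velocity, time ceases to pass.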
It is not difficult to imagine the wide possibilities that the theory of
relativity bestowed on the writers of science fiction. Travels through time
(which before the theory of relativity had already been conceived by H. G.
Wells in his Time Machine) filled an infinity of pages with words and an
infinity of feverish minds with fantasies. But these new and enigmatic
qualities of relativist time interested not only the writers of science
fiction. Authors who were very far from practicing this genre also were
seduced by the new behavior of space and time that the German physicist
discovered in nature. Among them Lawrence Durrell comes to my mind, who,
in a clarifying note in Balthazar, the second part of his Alexandria
Quartet, says: "Since modern literature does not offer us unities, I have
turned toward science to produce a novel like a ship with four bridges whose
form is based upon the principle of relativity." Thus, the first three parts
of the novel occur in the same space (Alexandria) and time (roughly the
interval that spans from the beginning of the Spanish Civil War to the eve of
the Second World War), while the last part covers several years afterward,
time thus coming to be the temporal coordinate of relativity.
I do not know to what point one can say whether Durrell's novel
corresponds to the Einsteinian space-time continuum or not, nor do I
think it matters very much. What is indubitable is that the English author
managed (and in that enviable manner in which intelligence becomes poetry)
to decode the absolute relativity of reality: a single incident, a duck hunt
on Lake Mariout, for example, is seen in a most different way by each one of
the characters who participate in it; the same event is simultaneously
different for each observer. And which is the true occurrence, or where in it
does the truth lie? That, only God or the omniscient narrator knows. Knowing
it cost Pursewarden, the tragic protagonist of the novel, his life.
Perhaps in this intrinsic incapacity to apprehend the truth lies the human
predicament. Perhaps therefore we are condemned to repeat our errors time and
again, because, in the final account, we are the first victims of the belief
that there is an objective reality, susceptible of transformation to our
liking according to our acts.
Finally, as García Márquez once said, we are subjective because we
are subjects. The only absolute is the constancy of the velocity of light in
a vacuum, although I do not think that consoles us very much.
22 September 2003

3. The fleeting instant
"THE PRESENT is motionless," Octavio Paz repeats with vehemence in his poem,
Wind from All Compass Points. In this verse I find a fine perception,
a perception proper to the poet: the support for the perpetuation of the
present lies in the devastating reality of the instant. For, of that
continuous, immutable time, like a tireless river flowing from nothing toward
the infinite, the only thing that modern physics has left unscathed is the
instant.
Gaston Bachelard explains it much better than I in a memorable essay he
published in 1932 entitled The Intuition of the Instant. In it, Bachelard
makes a fiery defense of the thesis that a countryman, his namesake and
colleague, Gaston Roupnel, had advanced in a book called Siloé against the
then widely accepted theory of duration of Henri Bergson: "Time has only one
reality, that of the instant," says Roupnel; and Bachelard concludes, "Time
is a reality compressed into an instant and suspended between two voids."
On the one hand, to confirm the undeniable reality of the fleeting
instant, Bachelard bases himself on the physics of his time, in particular
upon quantum mechanics, which in those days was radically changing our
conception of the macro- and micro-universe. Time, it is worth repeating, had
long since been arbitrarily tethered to the speed of light; and with the
study of quanta it was seen to be confined within minuscule packets of
energy. The duration that we measure with a clock is an illusion;
just as the continuous light that we see flowing from the sun or a light bulb
is an illusion. In reality what we see is an almost infinite string of
minuscule quanta of light or photons, in the same manner that the duration we
perceive with a clock is in reality an almost infinite string of minuscule
quanta of time or instants.
On the other hand, precisely there, "on the edge of that instant"
(another happy verse that emerged from the pen of the great Mexican bard), in
that flickering (to continue with Paz: "A blink is enough/ All plunges into a
fathomless eye/ A blink is enough/ All re-appears in that same eye") Bachelard
finds a limitless kernel from which to extract ethical, aesthetic and even
moral consequences.
If all that exists is the instant, the present instant, the present, then
the past instant, the past, no longer exists: it is only a memory; and since
the future instant does not exist either, the future is only longing. And,
most notably, in each present instant our life re-commences; in each present
instant, if we have the courage (a word most appreciated by the French
thinker) to confront it face to face, we can either continue the habit that
created our past instants, making of ourselves a simple bridge by which the
customs we have clung to are extended into the future instants, or else
challenge that old habit and propose to enter into novel future instants,
constructed around our most profound desires. In any present instant we can,
then, cease being what we do not wish to be and begin being what we want to
be. Our lives are not irremediably tied to our past; we are not victims of
it, as Freud thought; rather, we are victims of the terror of facing the
vertigo of the present and of daring to convert our dreams into future
instants.
It is never too late, the philosopher tells us, to take this crucial
step; it is never too late, we add, so long as one has not arrived at that
unique instant of our existence which is not followed by another. Perhaps, as
Heidegger thought, if we were more conscious that that fatal instant could be
the next one, we could fulfill with less fear the maxim of the French
philosopher: "The being who approaches life, drunk with novelty, is also
disposed to treat the present as a promise of the future. The greatest of the
powers is ingenuousness..."
29 September 2003

4. Contradictory complements
THE ROMANTIC spirit and the scientific spirit seem antagonistic. While the
first questions and rejects the world that the economy of the free market has
laboriously constructed, the second is the most prized and favored adoptive
son of capitalism. After having passed many centuries sheltered in the minds
and laboratories of persons at best eccentric, when not alienated and
sinister, or, worse yet, impious, agnostic and blasphemous for daring to
place the Sacred Scriptures in doubt, science was received with enthusiasm by
the world that was forming at the end of the 18th century and the beginning
of the 19th, when the triumphant masters of capital discovered the enormous
potential of scientific knowledge and, in particular, of its eternal
counterpart, technology, to generate riches.
In that same era, when the study of the ideal efficiency of a thermal
machine, or of the capricious impulses emitted by the contact of certain
metals with brine, capable of straightening the legs of a frog, become
themes worthy of treatment in the most famed European universities, the first
Romantics recoil from this new bourgeois order, pitiless, utilitarian and
insensible, which begins to rise over the horizon. Their art takes refuge in
the mythical past, or in those places where fantasy and imagination have
always reigned, and they conduct their lives in congruence with their
rebellion; they are indomitable, bohemian, scandalous, tragic and anarchic;
the exact opposite of what bourgeois morality, that epitome of hypocrisy and
restraint, dictates. Many of them, following the example of the brilliant
Lord Byron, let their lives consume themselves like a flash of magnesium
while they were still very young, whether by neglecting their health and
forcing it into every excess (tuberculosis, which thrives upon weak and
malnourished bodies, became the paradigmatic illness of the Romantics), or by
cutting their lives short with their own hands.
It is precisely of the author of one of those magnificent suicides
(although in this case we deal with the immolation of his literary character,
the young Werther) that I wanted to speak. For Goethe, one of the undeniable
founders of the Romantic movement, manifested throughout his long life, and
especially in old age, a lively interest in science.
Now, it was not the pragmatic aspect of scientific knowledge that
interested the poet of Frankfurt. In his day science had not yet been
completely hijacked by insatiable capital, and technology was not yet a
fundamental variable in the setting of market prices; each could still be
seen as an almost magical means of comprehending and transforming nature (a
vision that, despite everything, is still found in the spirit of the true
scientists). For Goethe,
nature and its manifestations represented an enigma worthy of captivating the
most Romantic spirit. Above all, he found in it beauty, life, and light and
color, a triad which absorbed the mind of the poet throughout the length of
his existence. To discuss beauty, he counted on poetry; to discuss existence,
he relied on his own life and upon his novels; to discuss light and
color, he turned to science. Amid his vast literary work embodied in poetry,
the novel and the humanistic essay, there appears, in 1810, when the poet was
about 60, the Theory of Colours, a book which couples science with poetry,
thus demonstrating that no product of human construction is the result solely
of reason or of the imagination, and that the Romantic spirit and the
scientific spirit, far from being antagonistic, are complementary, as are
certain colors.
8 October 2003

5. The scientific artist
"In 1810 Goethe published first a small notebook called Contributions to
Optics, in which he expounded his objective experiments in the matter, and
soon another little notebook similarly titled, referring to the subjective
experiments, both accompanied by illustrative plates," we are informed by
Rafael Cansinos Assens, careful biographer of the poet of Weimar. These
notebooks, enriched with new observations, were transformed into the Outline
of a Theory of Colour, Goethe's most ambitious scientific publication.
Why was a spirit who had so amply demonstrated his poetic vocation so
interested in the phenomena of the physical world? What led Goethe to shut
himself up for long hours in a darkened shed to study, with rustic
instruments, many of them of his own devising, the ray of light that
penetrated a small orifice made in the boards above the windows? What was it
that obliged the creator of Faust to repeat the experiments he read described
in whatever manual or treatise on optics fell into his hands? Why did the
Romantic poet have the confidence to criticize and even to cast doubt on the
veracity of the theses of the great Newton, the unchallenged authority on the
physics of his time, something which won him the scorn of his
contemporaries? Was Goethe perhaps a frustrated scientist?
The last question may be the simplest to answer. Goethe was not a
frustrated scientist, nor did he ever pretend to be one. He was above all a
poet, and a poet in the broad sense of the term: he was an artist, a creator,
one of those marvelous creatures who dare to emulate God. And it is his poetic
spirit that brought him to approach the mysteries of the creation, as much
those of human life as of nature in which it lives.
At some point in his youth Goethe thought of becoming a painter, yet soon
understood that the pen and not the brush was the vehicle which nature
offered him for expression; nevertheless, his interest in painting never
disappeared, nor in the colors that make it possible. His obsession with
light, originally a poetic metaphor, became an obsession with physical light
and its most beautiful attribute, color. And so his interest in this natural
phenomenon was more aesthetic than scientific; or, put differently, his
interest in aesthetics caused him to rummage around in the physical world.
In the true scientific spirit, Bachelard would say, the artist always
observes. The scientist and the artist are much nearer than it seems:
imagination, curiosity and doubt connect them; and, after all, the activities
that both pursue are products of the same human brain.
And that spirit is what led Goethe to investigate the nature of light
and color, and to obtain surprising results. Strictly speaking, many of the
criticisms he made of Newton are justified. The German poet demonstrated that
the learned Englishman had made some slips, at times reporting his results
not according to the data of the experiment, but as they ought to have come
out in order to conform to the overall theory. A sin that many scientists
commit, but which no one had discovered in Newton until a universally
acclaimed aged poet took the trouble to repeat, one by one, the experiments
that Newton reported in his Opticks.
However, in his critique of the Englishman's basic theory, Goethe was
mistaken. Colors are, in effect, the result of the different refractions
undergone by the rays that compose white light, as Newton affirmed, and not
the result of collisions between light and shadow, as the poet of Weimar
thought.
Strictly speaking, neither of the two was correct. Light turned out to be
something much more complex and marvelous than either of them believed.
15 October 2003

6. And God saw that the light was good
THE BIBLE recounts that on the first day of that exhausting week when God
created everything existent, after having made the sky and the earth, he
created light, with the goal of lifting his first creations out of the
obscurity in which they were wrapped, "and God saw that the light was good,
and separated the light from the darkness."
Modern physics recounts to us another story, perhaps more fascinating:
around 13,000 million years ago the universe was a gigantic black hole, a
sphere whose radius would not be much greater than the distance from Mars to
the sun. Light did not yet exist, for the gravity of a black hole is so
tremendous that it allows nothing to escape from within itself, not even
light, something so quick that, could it be at rest, it would have no mass.
Eventually, perhaps overwhelmed by the incommensurable amount of energy
accumulated in its innards, the great hole exploded...and light was created;
and light then began its eternal pilgrimage, extending at its hurried pace
the limits of the universe. Even today, the astronomers inform us, it is
possible to detect the echo of that formidable explosion; it is possible to
detect, with radiotelescopes of a sensitivity impossible to imagine, the
tracks that that primeval light has left in the corners of the universe.
Our intuition tells us that light is a manifestation of matter. Thus, the
great ball of hydrogen which is the sun emits torrents of light in consequence
of the nuclear reactions that occur in its interior, in the same way that the
chemical reactions that occur in the head of a match emit light, or the passage
of electrical current through a fine tungsten wire. But in this instance, as
in many others, reality proceeds to belie our intuition: strictly speaking,
it would be fairer to affirm that matter is a manifestation of energy and,
ultimately, of light.
When the Big Bang occurred, at first there was only light; and it was from
that energy, through its intricate collisions with itself, that the first
quarks emerged, those minimal particles that shape the atoms.
Thus, the atoms that comprise what we call matter would come to be
something like black holes which accommodate a quantity of energy absolutely
disproportionate to their mass. At the beginning of the last century,
Einstein demonstrated theoretically that a few grams of matter would suffice
(if one could liberate the energy they contain or, better said, the energy
that constitutes them) to supply electrical current to a small city for
several days. The unhappy
inhabitants of Hiroshima and Nagasaki had the misfortune of proving Einstein's
theory: a few grams of uranium, in the first case, and of plutonium, in the
second, were more than sufficient to bring Dante's inferno to those cities,
with all its paraphernalia.
I weigh 185 pounds. Sometimes I shudder to think of the huge quantity of
energy that resides in my body. There would have to be an explosion of some
10,000 hydrogen bombs to give me an idea of what in reality is within me...
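These figures can be roughly checked with Einstein's E = mc². A minimal sketch, assuming 185 pounds is about 84 kilograms and taking one megaton of TNT as 4.184 × 10¹⁵ joules (both conversions are mine, not the author's):

```python
# Rough check of the mass-energy claim, assuming 185 lb ~ 84 kg.
c = 299_792_458.0        # speed of light in a vacuum, m/s
m = 185 * 0.45359237     # 185 pounds expressed in kilograms

E = m * c ** 2           # rest energy, E = m c^2, in joules

megaton_tnt = 4.184e15   # joules released by one megaton of TNT
print(f"E = {E:.2e} J, roughly {E / megaton_tnt:,.0f} megatons of TNT")
```

On this estimate the rest energy comes to a couple of thousand megatons of TNT; whether that corresponds to 10,000 bombs depends on the yield assumed per bomb, but the order of magnitude of the shudder holds either way.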
Certainly Goethe, who loved light so much, who even called for it on his
deathbed, as if calling for a lost love, would have liked to know that we are
creatures made of light trapped in our atoms, and that light, as the god of
the Bible discovered at the beginning of his colossal work, is a good thing.
It is too bad that we emit so little.
23 October 2003

7. World of color
THE STUDY where I work is located on a second floor. In front of my writing
table there is a great window through which I can observe, on the ground level,
a eucalyptus, a tamarind, a guava tree, the top of some papyrus, the leaves of
a palm tree and the foliage of a bougainvillea climbing on a lattice. Further
back, in the distance, I see the mountains of the Neovolcanic Range, with the
three soft slate-colored peaks of the Tres Marías on the heights. Above, at
the very top of the sierra, there is a cumulus cloud, grey in its lowest part
and bright white above. Over it all, the soft blue sky, diaphanous, almost
transparent, as if announcing the infinite existing beyond it. The predominant
color in this handsome picture is the green of the plants, perhaps punctuated
with the red--between purple and crimson--characteristic of the
bougainvilleas.
Although I know that the colors that arrive at my retina are the result of
the white light of the sun falling upon the objects and being reflected
toward my eyes after having left in them a portion of its spectrum (the
chlorophyll of the plants, for example, thanks to the magnesium it contains,
absorbs all the colors there are in visible light, except for green), I do
not cease marveling and being surprised at the amazing capacity our eyes have
for discerning colors: I can distinguish without any difficulty between the
different shades of green of the foliage of each of the six plants. And I can
do so even though the difference between one shade and another is less than
one hundred-millionth of a millimeter in the wavelength of each green color
that arrives at my retinas.
There is no room to doubt that we are animals of vision. If we had, for
example, in the sense of hearing a capacity similar to the one we have for
discerning shades of color, we could easily hear the steps of an ant or the
heartbeat of a lover 100 meters away. If our sense of smell had the
sensitivity of our vision, the dogs that detect drugs or weapons in airports
would not be necessary; any guard could do it without a problem. In the case
of touch, we could read with our hands, for the thickness of the stain of ink
upon the paper could easily be detected with the tips of the fingers. And,
finally, in relation to the sense of weight (that sixth sense which we
possess and to which so little attention is paid), there would be no need for
scales in the food stores: it would suffice to handle the merchandise to know
exactly what its mass is.
In truth, science has not yet explained in detail what mechanisms occur in
our sense of sight that permit us that astonishing ability to distinguish
colors (we have already said once that scientists, unlike politicians, do not
recoil from admitting their ignorance). Be that as it may, the fact is that
we are, I insist, basically animals of vision, and our vision of the world,
and the language with which we express it, is filled with colors. I ask: if
instead of being animals of vision we were animals of
olfaction, what adjective would we give to red literature, or to black; what
would we call the yellow pages of the directory; what title (and what
content) would Rubén Darío have given to his immortal poem Blue; what would
blank verse be called; what name would Stendhal have sent to the printer for
his The Red and the Black; would we discriminate among persons by their
aroma; in place of flags, would nations have censers; what would be the smell
of evil and what of good; what that of mourning and what of happiness? What
odor would hope have?
30 October 2003

8. A true Romantic
WE HAVE mentioned Goethe as a paradigm of the poetic spirit interested in
scientific thought. Let us now view an opposite case; that is, the case of a
devotedly scientific spirit who lived a Romantic life, in the most tragic
sense of the term. We refer to Ludwig Boltzmann, an Austrian physicist who
was born in Vienna in 1844.
From earliest youth he distinguished himself as an individual possessed of
that type of alert, inquisitive and penetrating intelligence which nature,
always so miserly, bestows on very few. By 1866, when he was only 22 years
old, he had already obtained his doctorate in physical sciences from the
university of his native city. From then on he dedicated his life to research
and teaching, offering classes in Vienna, Munich and Leipzig. His free time
he occupied in accumulating amorous disappointments. The almost devilish
ability he showed throughout his existence for untangling the mysteries of
the physical world was counterbalanced by a definite incapacity to comprehend
the no less mysterious feminine nature.
Interested, like almost all the physicists of his time, in the energy
processes related to thermal machinery, he was the first to propose a
mathematical foundation for a recently discovered thermodynamic property that
had overturned the theoretical apparatus of classical mechanics: entropy. For
Boltzmann, entropy, that tendency towards disorder manifested by the
molecules which comprise a system, which is associated with the generation of
a sort of degraded energy that can no longer be useful for performing work
and which condemned the universe to thermal extinction, could only be
explained in statistical terms. Acute, solidly based reasoning led him to
conclude that the entropy of a system is in direct relation to the logarithm
of the number of configurations that the molecules comprising the system can
adopt. Put otherwise, entropy is a measure of the probability of the various
types of apparently chaotic movement that the molecules of a system display.
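The logarithmic relation described above is conventionally written S = k log W, with W the number of microscopic configurations; a minimal sketch (the function name is illustrative, and k is the modern value of the Boltzmann constant):

```python
# Boltzmann's statistical entropy: S = k * ln(W), where W counts the
# microscopic configurations (microstates) compatible with the
# macroscopic state of the system.
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def boltzmann_entropy(microstates: int) -> float:
    """Entropy in joules per kelvin for W equally probable microstates."""
    return K_B * math.log(microstates)

# A single possible configuration means perfect order and zero entropy...
print(boltzmann_entropy(1))       # 0.0
# ...and entropy grows with the logarithm of the number of configurations.
print(boltzmann_entropy(10**6))
```

The logarithm is what makes entropies additive: combining two independent systems multiplies their configuration counts, and so adds their entropies.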
When Boltzmann proposed his theory, the idea was universally accepted in
the scientific world that all the laws of physics could be explained in
absolute terms; it was unthinkable, and even heretical, to suppose that
nature might manifest itself in an episodic manner, that the results of its
actions should be subject to the rule of probabilities. Whoever dared to
postulate that possibility was condemned to be considered a madman, in the
best of cases, and a fool, in the worst. Boltzmann dared to postulate that
possibility, and they did treat him as a madman and a fool. The scorn which
that man suffered perhaps
might only be comparable to that suffered by Galileo and Giordano Bruno three
centuries previously; but, in the final account, these latter were vilified
and humiliated by the chiefs of the Catholic church, an institution that
could not be more distant from science, whereas with Boltzmann it was his own
colleagues who took it upon themselves to convert his life, in itself nothing
pleasurable, into a calvary; something that doubtless made the suffering of
this man even greater.
Not many years passed before, after the discoveries of Planck in 1900
and of Einstein in 1905, probability received a definitive welcome in the
theoretical body of modern physics. Of course, Boltzmann was the first to be
vindicated when the air of uncertainty imposed itself on the thought of the men
of science; the scorn was transformed into homage.
But it was too late: sick and exhausted, Boltzmann took his own life on the 5th
of September, 1906. Upon his tombstone, in the manner of an epitaph, is engraved
the equation of entropy for which he literally gave his life. A true Romantic.
7 November 2003

9. We are nothing

The universe is nothing more than totality not being what I know it is.
PAUL CLAUDEL
WHEN it was our eyes that established the magnitude of distances and the nature
of things, the universe was relatively small and furthermore was finite. Even
though the observer of the firmament during a clear, moonless night has the
sensation that there are an immeasurable number of stars, in reality there are
rather few: a little more than 6,000. And they had already been counted dozens
of centuries ago.
Thus then, the sun, the moon, the earth, six planets, various comets, and
around 6,000 stars comprised the universe. There was sufficient space out
beyond the sky to accommodate the kingdom of the heavens and in the entrails of
the earth to have room for hell. The dimensions of the totality were so reduced
that the grandeur of God could be sufficiently measured by the ocean, for, as
the theologians informed us, the magnitude of the Creator was so astonishing
that in a fold of his eyelid there could comfortably fit the entire oceanic
sea. Compared to the dimensions of the universe which today are known, that
medieval god would be tinier than a bacterium.
With the invention of the telescope the universe began to grow and man to
shrink. When the firmament is observed through a refracting telescope as
rudimentary as the one Galileo used, the number of stars that are seen is,
in effect, immeasurable and new celestial bodies also appear, whose existence
had not been suspected but which had always been there, like some of the moons
of Jupiter.
In the times of Voltaire, the universe was already a respectable
conglomerate of millions of celestial bodies, our sun being only one more of
them, and not, certainly, one of the greatest size. Nietzsche affirms that it
was in that epoch when God died, perhaps overwhelmed by the dimensions of that
which he supposedly had created and without doubt dejected by the formidable
power of Reason, which, in the final analysis, occupied his place and rules
from then on in an even more despotic way than its predecessor.
The age of Reason is the age of capital, of machines and technology. And
the latter is responsible for the improvements that were gradually being made
in telescopes. The observations that were made in the formidable reflecting
telescopes fabricated during the first decades of the last century achieved
spectacular results: it was discovered that our known universe, the Milky Way,
was only one among hundreds of billions of conglomerations of stars called
galaxies. Thus, the universe turned out to be hundreds of billions of times larger
and ourselves, in consequence, an equivalent amount smaller.
Yet the matter does not seem to stop here: the recent observations of the
Hubble telescope have caused the astronomers to suspect that the dimensions of
the universe should be revised upwards. Some even think that probably the whole
known universe is not more than a conglomeration of hundreds of billions of
galaxies that might be seen as a luminous point (like a quasar) from a
neighboring universe. Soon we shall know if this supposition is correct. And if
it were, if the universe were hundreds of billions of times larger, we would
remain chillingly close to nothing; and certainly Reason, our tyrannical
goddess, then should cede her place to something more powerful that could cope
with so much grandeur. Perhaps it could be Unreason. Maybe its reign has
already begun and we still have not noticed.
1 December 2005

10. In the field of the physical field
SCIENCE, especially physics, has taken many words of the common language to
denote the phenomena that it studies. Thus, terms like force, work, potential,
resistance, capacity, and even mass acquire, in scientific discourse, a
different meaning from that which we are accustomed to attribute to them. It is
true that the opposite also occurs: some words which the men of science have
proposed to denote events or objects observed in their field of knowledge have
translated themselves into common speech with a meaning not always
corresponding to the original. Words such as energy, entropy, valence, gene,
clone, electricity, or fractal continually appear in our discussions and
writings.
Out of this set of ambivalent words, to call them something, there is
one that has always commanded my attention, as much for the meaning, or
meanings, it has in everyday speech as for the one it has in the lexicon of
physics: field.
The Dictionary of the Spanish language informs us that the word
comes from "Lat. campus, a flat plain, field of battle." Field, then,
we call an open space, extended and delimited, as might be an area for playing
soccer; but we also call field that which is beyond the limits of an urban
area, the open space, populated with trees, mountains, fierce animals and
sometimes peasants, the span between one city and another. Alternatively, the
field can be an imaginary space which has room for diverse objects, concepts or
themes that have something in common: we speak of the field of science, the
field of literature, the automotive field, the field of communications, etc.
Here, the initial idea of open space becomes something wide and spacious yet
also closed, closed upon itself; therefore, under this usage, the word
field is synonymous with sphere.
In physics, the word field, without losing its association with a space
and a sphere, acquires a more enigmatic and suggestive meaning given the
contradictory nature that is implicit in the concept. To explain: the same
Dictionary of the Spanish language defines the term field as is employed
in physics as a "magnitude distributed in space, through which actions among
particles are exercised at a distance, such as the electrical field or the
gravitational field." Thus then, the physical field is the space wherein action
at a distance between the material particles is manifested, yet is, at the same
time, that manifestation. When we observe an object, a tree, for example, what
arrives at our retinas is the result of the interaction between the
electromagnetic field of the light that falls upon the object and the
electromagnetic field that the object itself emits. The light that the object
does not absorb during that interaction is that which, reflected, reaches our
eyes. What we see is the space that that body occupies which at the same time
is filled with the energy it emits; that which we see, ultimately, is the field
of that object.
The same again occurs with touch: when we caress a beloved being, in
reality we never make contact with the particles that comprise her body; they
are so incredibly small and are separated by such incredibly great spaces, that
it is practically impossible to be able to touch them with our own particles,
which are equally small and are equally separated. Such that, if the particles
that give body to our beloved, and to ourselves, did not fill the space they
enclose with a field of force, our hand would cleanly pass through the
beloved's body as if it were a ghost. If the particles that exist in the sun
did not create a field of gravitational attraction, Earth would never have
remained trapped beneath its beneficent light and heat, which also are fields.
The absence of the physical field is, it may be, the best definition of
the slippery philosophic term of nothingness.
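As a small illustration of how a field assigns a magnitude to every point of the space around a body, the sketch below evaluates the Newtonian gravitational field of the sun at the distance of Earth's orbit, and of Earth at its own surface. The constants are rounded textbook values, not figures from the text.

```python
# Newtonian gravitational field: at distance r from a mass M, the field
# strength g = G*M / r^2 is the acceleration any test body would feel there.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # mass of the sun, kg
AU = 1.496e11        # mean Earth-sun distance, m
M_EARTH = 5.972e24   # mass of Earth, kg
R_EARTH = 6.371e6    # radius of Earth, m

def field_strength(mass_kg, r_m):
    """Magnitude of the gravitational field of mass_kg at distance r_m."""
    return G * mass_kg / r_m**2

g_at_earth_orbit = field_strength(M_SUN, AU)      # ~0.006 m/s^2
g_at_surface = field_strength(M_EARTH, R_EARTH)   # ~9.8 m/s^2, everyday gravity
```

The sun's field at our distance is feeble, about six thousandths of a meter per second squared, yet it is what keeps Earth, as the essay says, trapped in its orbit.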
4 April 2004

11. The peacefulness of anonymity
OVER THE LENGTH of a life it is almost a rule that we remember one of the
elapsed years as very special or unique. For one reason or another, be it
tragic or joyful, there are years that are not forgotten. Without a doubt, for
Albert Einstein that of 1905 was one of those.
In 1901, at only 22 years of age, and recently graduated from the Swiss
Polytechnic Academy, he established himself in the city of Bern. There he found
employment that, for the common run of humanity, would signify a well-remunerated
activity requiring at least eight hours a day to accomplish: the post of
examiner in that city's Patent Office.
Upon the desk of the young physicist there regularly accumulated very
singular documents: technical descriptions of strange apparatuses accompanied
by a set of plans, schemes and protocols that illustrated the descriptions. The
work of examiner consisted in studying those documents, decoding the intricate
plans and schemes to opine, in the end, whether the proposed invention was
effectively useful and whether, in its design and functioning, it was not a
plagiarism of another already patented invention.
We said above that for ordinary mortals such work would require one's full
time, but not for Einstein. His special intelligence allowed him to decode the
mechanism of any invention, utilizing only a fourth part of the time that any
other person would require. Thus, in each workday during those mild years
between 1901 and 1905, Einstein dedicated six hours to meditating and reflecting
upon his favorite subject of study: physics. With the copy of some plan spread
across his desk, which he appeared to observe intently so that his colleagues
and his boss believed he was absorbed in understanding an invention, his mind,
in reality, was churning in the arcane mysteries of electrodynamics, quantum
mechanics and Brownian movement. He did not need to take notes nor put the
complicated equations that skimmed through his neurons onto paper; he had the
ability to manipulate them mentally. Upon arriving home he took advantage of
the hours after dinner and before retiring to draft that which he had
contemplated during the day. The fruit of that calm labor, carried out
far from the academic hallways, where he was perfectly unknown, was published
in five brilliant scientific articles in that memorable 1905. Three of them
(that referring to the mathematical explanation of the movement of particles
in suspension; that in which the discoveries of the recent quantum theory are
applied for the first time to take account of the photoelectric effect, that
is, the phenomenon through which luminous energy is converted into electric
energy; and lastly, that which seated the groundwork for special relativity)
represented a revolution in the world of physics comparable to that unleashed
by the great Newton 200 years previously.
Beginning with those publications, Albert Einstein forever abandoned the
peaceful anonymity in which he had lived up to then and became, very quickly,
the archetype of the man of science in the 20th century. His fame became
universal and his peculiar personality legendary. His fertile mind did not
stop working until the last day of his life, and the contributions that he
made to the science he loved so much are many; yet, as he himself recounted,
the marvelous peacefulness in which he lived the first years of the century,
when he was completely unknown, immersed in cryptic plans and schemes which he
feigned to observe attentively while his mind traversed the limits of the
universe, that calm he would never know again.
19 October 2005

12. Simplicity

The most incomprehensible thing about the world is that it is
at all comprehensible.
Nature hides her secrets because of her essential loftiness, but
not by means of ruse.
ALBERT EINSTEIN
IT MAY seem paradoxical, but the paradigm that guided Einstein throughout his
entire scientific work was simplicity. He was firmly convinced that the
universe encloses some general laws in its breast, diaphanous, simple and
beautiful, which make sense of everything that happens, that has occurred and
that will occur in it. The work of the scientist consists in revealing them.
And that is what they have been doing for at least 2,000 years. It so happens
that the master plan which the "Old Man"--to use a term that Einstein
enjoyed--guards so jealously is not easy to uncover; as Heraclitus well put it, "Nature
likes to hide." What the men of science have done is to advance by removing
one by one the veils that wrap those universal laws. Thus, when at the end of
the 17th century the great Newton proposed his theory of gravitation, it
seemed that, at least concerning the movements it describes, the key to that
master order had at last been discovered.
More than two centuries later, Einstein, appealing to simplicity,
demonstrated that Newton's theory is no more than another veil behind which
the true plan hides. The German never accepted the space of absolute
referents propounded by the Englishman as the basis for his theory of
uniform rectilinear movement. There was no way to demonstrate experimentally
the existence of such a space and, what was most important for Einstein, the
master plan ought to be simpler; that is, it should apply to any system of
reference, so that saying "the earth revolves once a day" would be
equivalent to saying "the heavens revolve daily around the earth."
The special theory of relativity resolved the problem: it does not
require that the existence of a space of absolute referents be postulated; it
accounts for rectilinear movement in any system and the laws of physics remain
invariable. And not only space lost its absolute character, but also time
itself: Einstein demonstrated that time, far from being a continuous and
independent transformation in which movement occurs, is a physical variable
associated with this, which can shorten or prolong itself according to the
reference system from which it is measured. The only thing postulated as an
absolute in the special theory of relativity is the speed of light in a
vacuum, a phenomenon that has been experimentally proved innumerable times.
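The frame-dependence of time described above is captured by the standard time-dilation formula of special relativity (not written out in the text): an interval $\Delta t$ measured in the frame of a clock is measured as

```latex
\Delta t' = \frac{\Delta t}{\sqrt{1 - v^2/c^2}}
```

by an observer relative to whom the clock moves at speed $v$; since $c$, the speed of light in a vacuum, is the theory's single absolute, $\Delta t'$ grows without limit as $v$ approaches $c$.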
No sooner had Einstein published his theory than he focused his
activity upon the comprehension of a more complex phenomenon: the curvilinear
and accelerated movement of the celestial bodies. Here, apparently, there was
nothing objectionable in Newton's theory. The planets and the stars move with
chronometric precision in accordance with that theory. Perhaps the movement of
Mercury from time to time deviated from the standard of Newton's laws, but in
such small magnitudes that they could well be treated as measurement errors. It
was not this phenomenon that provoked Einstein's interest; rather, it was a
supposition of Newton's theory that contradicted the simplicity which
the German required of the theories of physics. In his laws of
movement, Newton asserts the existence of inertial mass, that is, the
magnitude with which any body opposes a change in its state of movement or
repose. The greater the mass, the greater the force that must be used to move
it. Elsewhere, in his theory of universal gravitation, Newton proposes the
existence of gravitational mass of bodies; this is, that magnitude by which
one body attracts another. In principle, these two magnitudes are different,
but in fact have the same value. Why this may be so has no explanation in the
Newtonian theory; it would be, in the final analysis, a happy coincidence. Yet
Einstein, as opposed to Malcolm Lowry, disliked coincidences. For him it was
evident that there should be a simple and logical explanation that would
account for the equivalence between the two magnitudes. He dedicated ten years
of his life to the search for such an explanation.
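The "happy coincidence" can be stated in one line. For a body of inertial mass $m_i$ and gravitational mass $m_g$ falling toward a mass $M$, Newton's second law and his law of gravitation together give

```latex
m_i\,a = \frac{G\,m_g\,M}{r^2}
\quad\Longrightarrow\quad
a = \frac{G M}{r^2}\,\frac{m_g}{m_i}
```

Only if $m_g/m_i$ is the same for all bodies do all of them fall with the same acceleration, as observation shows they do; Newtonian theory offers no reason why that ratio should be exactly one.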
And in effect, the general theory of relativity which Einstein proposed
in 1915 to account for the equivalence between inertial and
gravitational mass is, in logical terms, much simpler, but indubitably more
complex in its mathematical formulation and almost impossible to demonstrate
experimentally. The theory, building on what he had discovered in special
relativity, postulates that there is no difference whatsoever between inertial
and gravitational mass simply because they are the same thing: in the same way
that Maxwell demonstrated that electrical and magnetic phenomena can be
explained in terms of the field of forces they generate, the mass of any
object generates a gravitational field that has the ability to curve the space
that surrounds it. The planets revolve around the sun because they are trapped
in its gravitational field, and not only in a spatial continuum but also in a
temporal one: the gravitational field is manifested not in three dimensions as
in Newtonian space, but in four: the three already familiar spatial ones
and a fourth, temporal one, which causes an event to be unique in
any system of reference. Because there is mass in it, the space of the
universe must be curved; and if it is, it should have a limit. For the first
time in the history of physics, a theory establishes the possibility of
measuring the universe.
Of course, and as occurred with the special theory, when the velocities
in play are small compared to that of light, Newton's theory of
gravitation becomes a special case of the theory of relativity.
Thus, everything that Newton advanced is conserved, at the same time that
phenomena which depart from his theory, as in the case of Mercury's movement,
now are explained by Einstein's new theory: at some point in its orbit,
Mercury approaches so close to the sun that the effect of the star's
gravitational field upon its trajectory can be observed and measured.
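The deviation of Mercury mentioned above can be estimated with the standard first-order formula from general relativity for the perihelion advance per orbit, Δφ = 6πGM/(c²a(1−e²)). The sketch below uses rounded textbook constants (my values, not the text's) and recovers the famous figure of roughly 43 seconds of arc per century:

```python
import math

# Perihelion advance per orbit predicted by general relativity (first order):
# dphi = 6*pi*G*M / (c^2 * a * (1 - e^2))
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # mass of the sun, kg
C = 2.998e8            # speed of light, m/s
A = 5.791e10           # semi-major axis of Mercury's orbit, m
E = 0.2056             # eccentricity of Mercury's orbit
PERIOD_DAYS = 87.97    # Mercury's orbital period, days

dphi = 6 * math.pi * G * M_SUN / (C**2 * A * (1 - E**2))  # radians per orbit
orbits_per_century = 36525 / PERIOD_DAYS
arcsec_per_century = dphi * orbits_per_century * (180 / math.pi) * 3600
# arcsec_per_century comes out close to 43 arc-seconds
```

The tiny per-orbit shift, about half a millionth of a radian, only becomes observable because Mercury completes more than four hundred orbits in a century.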
But the theory had another consequence which also could be measurable:
the gravitational field, if it is very intense, is capable of bending the
trajectory of light itself. In 1919 there was to occur a total eclipse of the
sun that was expected to permit confirming or refuting the theory of the German
physicist. Directed by the English astronomer Arthur Eddington, a team of
scientists went to a place in Africa where the eclipse could be observed in
its maximum splendor. They photographed the light of the stars near the
sun and determined their trajectories. They confirmed, beyond doubt, that the
rays of light coming from those stars had deviated from their trajectories
due to the gravitational field of the sun to just the degree that had been
predicted by Einstein's theory.
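The prediction the expedition verified can be reproduced with the standard formula for the deflection of a ray grazing the sun's limb, δ = 4GM/(c²R); with rounded textbook constants (my values, not figures from the text) it gives the celebrated 1.75 seconds of arc:

```python
import math

# Deflection of a light ray grazing the sun's edge, as predicted by
# general relativity: delta = 4*G*M / (c^2 * R)
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30    # mass of the sun, kg
C = 2.998e8         # speed of light, m/s
R_SUN = 6.963e8     # radius of the sun, m

delta_rad = 4 * G * M_SUN / (C**2 * R_SUN)
delta_arcsec = delta_rad * (180 / math.pi) * 3600
# delta_arcsec comes out close to 1.75 arc-seconds
```

A shift of under two arc-seconds in a star's apparent position is minute, which is why a total eclipse was needed: only then could stars be photographed right beside the sun.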
When the result of the experiment was made public, an unprecedented phenomenon of
social communication occurred: the print media of the time, perhaps
fed up after four years of transmitting information about the Great War,
discovered in the theory of relativity and in Einstein himself a rich vein for
attracting readers. It became fashionable to speak of physics and of the
scientists. The "Swiss Jew," as the English called him, had revolutionized our
conception of the universe, the journalists affirmed, with
unforeseen consequences. Thanks to an English astronomer, who verified the
theory of the wise Swiss Jew, they insisted, the world would not be the same.
Now we know that even the universe has a limit. Announcements like these
appeared in abundance in the European press, accompanied by erudite editorials
in which (with all seriousness, and despite the fact that very few people in
all the world rightly understood the new theory) the potential of the
theory of relativity in the fields of social science, philosophy, religion, and
even mysticism was discussed.
Einstein, then, is the first scientist in history to attain the fame and
the popularity of a movie star or of a boxer, something which, on the other
hand, never pleased him: "Like the man in the fairy tale who turned everything
he touched into gold - so with me everything turns into a fuss in the
newspapers," he said with a certain sadness to his friend, the physicist Max
Born, in a letter from 1920.
With this achievement behind him, the "Swiss Jew" or the "brilliant
German scientist," as his countrymen called him (in post-war Germany Einstein
became something of a symbol: "Even if the English defeated us with arms," one
could read in a newspaper of the day, "we have defeated them with
intelligence: a son of Germany has corrected and superseded the Englishman
Isaac Newton." That would not last long: already at the beginning of the
1930s, seeing the advance of the National Socialists, Einstein commented to a
friend: "Soon I shall become a Swiss Jew here") did not interrupt his work at
the University of Berlin where, certainly, he never felt comfortable, now
focused on a struggle that would last the rest of his life and in which he
could not prevail: to demonstrate that the recently founded quantum theory was
intrinsically erroneous, for it contained at its core a principle that
Einstein's idea of simplicity could not accept: that of uncertainty.
26 October 2005

13. Determinism and uncertainty

God may be subtle, but he isn't mean.
ALBERT EINSTEIN
IN 1927 the most distinguished physicists in the world gathered in Brussels,
as they had done every three years, to present and discuss the most recent
advances in their science (the so-called Solvay Conference). There, the Dane
Niels Bohr expounded before the participants the fundamental principles of
quantum mechanics, a new science that precisely described the phenomena
that occur in the world of the atoms.
The theory rested upon a fundamental principle that recently had been
proposed by the young German physicist Werner Heisenberg, and which could be
summarized as follows: it is impossible to determine with absolute precision,
and simultaneously, both the position and the momentum (the product of
mass and velocity) of a photon or of any subatomic
particle. That is, the greater the certainty one has of the position of a
particle, the greater will be the uncertainty with which its
momentum is measured, and vice versa. The degree of uncertainty is
proportional to a fundamental physical constant which Max Planck had proposed
25 years previously and whose value was experimentally established in an
uncontroversial way.
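In modern notation Heisenberg's principle is usually written

```latex
\Delta x \,\Delta p \;\geq\; \frac{\hbar}{2}
```

where $\Delta x$ and $\Delta p$ are the uncertainties in position and momentum, and $\hbar = h/2\pi$ is the reduced form of the constant Planck proposed in 1900.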
One immediate consequence of this theory was that, as had happened before
with thermodynamics thanks to the work of Boltzmann, a statistical method was
again employed to describe certain natural phenomena. The old determinism on
which classical mechanics, and even the theory of relativity, rested
could not account for the subatomic world; the certainty of finding or
describing a physical property needed to be replaced by the probability of
doing so: it is possible to establish the maximum probability of finding an
electron in a given place at a given velocity, for example, yet it is not
possible to establish those coordinates with absolute certainty. Nature now
presented itself as not only hidden but also blurred.
Confronted with the impeccable arguments, as much theoretical as
experimental, with which Bohr supported his presentation, all the
participants
at the conference accepted the new theory and prepared to struggle with
probability in their future investigations. All except one, who, curiously,
was the most famous and respected: Albert Einstein.
Einstein did not accept that nature might behave in that way. His
celebrated phrase, "God does not play dice," was a concise, but precise, way
of expressing the conception that the German physicist had of the world of
physics. He accepted quantum mechanics as an approximation, as another veil
behind which the Master Plan we cited previously hides, yet in no way did he
consider it a theory that would account for what truly happened in the deepest
of microcosms. Behind it there had to be a simple, elegant and precise
theory that would describe those phenomena without the need for appealing to
probability or uncertainty. In other words, Einstein never accepted that
chance should be an innate characteristic of the physical world we inhabit.
There had to be a determinate order in it; everything was a question of
discovering it: behind the veils of uncertainty, the certitude that it is
possible to fully determine what occurs would reappear triumphantly. He
dedicated the rest of his life to a search for that certainty and was
unsuccessful; however he never recognized that his lack of success implied
that the universe is probabilistic.
Nevertheless, and though it would have pained Einstein, everything
seems to indicate that God does, in effect, play dice.
2 November 2005

14. Wave or corpuscle
IT IS KNOWN that this year, 2005, has been dedicated around the world to
physics, prompted by the centenary of the publication, in 1905, of three
articles by Albert Einstein which revolutionized that science. Indubitably the most well-
known of the three is that which expounds the fundamental principles of the
special theory of relativity. It seems right to me that it should be so, given
that that theory, in radically changing the conceptions concerning time and
space that were held up to then, postulates the enormous quantity of energy
that accumulates in the mass of a body in repose; energy that explains the
light and the heat that we receive from the sun and which was experienced in
the very flesh of the unhappy residents of Hiroshima and Nagasaki 60 years
ago.
However, even though they are not as spectacular as that dedicated to
special relativity, the other two articles, the one focused on the
mathematical description of Brownian movement and the other an explanation of
the photoelectric effect, turned out equally valuable in the construction of
two crucial fields of modern physics: statistical mechanics and quantum mechanics.
I shall refer now to the photoelectric effect. The nature of light is a
theme that has intrigued men of science since very remote times, although its
formal study (that is, in the terms we understand today as scientific)
commenced at the end of the 16th century with a singular experiment performed
by the indefatigable Galileo and whose intent was to measure the speed with
which light travels: one night he placed an assistant atop a mountain in the
region of Tuscany armed with a lantern. Galileo, equipped also with a lantern
and a timepiece, located himself on the crest of another mountain, some miles
distant from the first. The idea was that when Galileo uncovered his lantern he
would begin to record the time on his clock; the moment the assistant saw the
light from Galileo's lantern, he would uncover his own, so that when the wise
man received the light sent back by his assistant, he could measure the time
taken by the luminous ray to travel there and back. It sufficed to divide the
distance between the two mountains by half of this time to calculate
the speed of light. In the end, the experiment failed: light travels very
much faster than Galileo suspected; his assistant
would have had to be located on the moon for Galileo to have a couple of
seconds to measure on his clock. Almost a century later, in 1675, the Dane
Olaus Roemer managed to calculate this figure, using the eclipses of a
satellite of Jupiter as his "distant witness." From then on light was
considered a physical phenomenon that could be measured and observed. Also
beginning then there ensued a polemic that lasted more than 100 years about
the physical nature of light: there were many who thought it was a wave
phenomenon; others, fewer, affirmed that it was transported in
corpuscles.
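A quick calculation shows why Galileo's mountain-top attempt, described above, was doomed: over any terrestrial distance the round trip of light is enormously shorter than a human reaction time. The 3-kilometre separation below is an illustrative assumption, not a figure from the text.

```python
C = 3.0e8          # speed of light, m/s (rounded)
DISTANCE = 3000.0  # assumed separation of the two mountains, m
REACTION = 0.2     # typical human reaction time, s

round_trip = 2 * DISTANCE / C   # time for light to go and come back: 2e-05 s
ratio = REACTION / round_trip   # the reaction time spans ~10,000 round trips
```

Uncovering a lantern takes a good fraction of a second, so the twenty-microsecond round trip was hopelessly buried in human slowness; only an assistant on the moon, with a round trip of about two and a half seconds, would have given Galileo something to time.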
Isaac Newton was the most distinguished defender of the corpuscular
theory of light; his principal opponent was the Dutchman Christiaan Huygens, a
highly regarded scientist of his time, who was firmly convinced that light is
propagated through a wavelike motion, similar to that of sound or to the waves
in water. But the authority of Newton, above all in the last years of his
life, was practically absolute, such that his viewpoint was imposed upon
almost all the scientific circles of the 18th century. Goethe himself, as we
have already noticed in another section, shared this theory with the great
English genius, although in almost everything else referring to optics he held
very different conceptions from those of Newton.
At the beginning of the 19th century the British scientist Thomas Young
(to whom we also owe the meaning that the word energy has today)
performed some experiments with luminous beams which achieved very enigmatic
results; so much so, that not even Young himself was capable of explaining
them: he observed that if one causes a beam of light to pass through an
orifice or collide with an obstacle, the beam is distorted, such that the
contour of the shadow it projects is not perfectly sharp but instead
has a soft fringe composed of alternating luminous and dark stripes (a
phenomenon known as the diffraction of light). A little later, in 1814,
the French physicist Augustin-Jean Fresnel, who, curiously, was born in the town of
Broglie (below it will be seen why that fact is curious) would demonstrate
that the phenomenon discovered by Young could not be explained in the terms of
the corpuscular theory, for if light were a flow of particles, those colliding
with a body would rebound while those not colliding with it would pass
freely, so that the projected shadow should be perfectly sharp. If, on
the other hand, light is propagated in the form of waves, then it is
distorted upon hitting the object, just as occurs
with waves in the water or with sound. Later this same Fresnel demonstrated
that two beams of diffracted light, as also occurs with waves of water and
with sound, are capable of forming dark interference lines upon coming
into contact. In terms of the wave theory the phenomenon of interference is
simple to explain: when the crest of one wave coincides with the valley of
another wave, they annul each other; and when crests coincide with
crests or valleys with valleys, the waves augment in size, which explains the
dark lines next to the brilliant ones that form when two beams of diffracted
light coincide. In terms of the corpuscular theory it is impossible to explain
this phenomenon.
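Fresnel's crest-and-valley argument can be sketched numerically: two waves added in phase reinforce one another, while the same waves added half a cycle apart cancel, which is what produces the dark lines. A minimal sketch, with hypothetical unit-amplitude waves:

```python
import math

def superpose(x, phase_shift):
    """Sum of two unit-amplitude waves, the second shifted by phase_shift."""
    return math.sin(x) + math.sin(x + phase_shift)

x = 1.0  # any sample point along the beam

crest_on_crest = superpose(x, 0.0)       # in phase: the amplitude doubles
crest_on_valley = superpose(x, math.pi)  # half a cycle apart: they annul
```

At the sample point, the in-phase sum is twice the single-wave value, while the half-cycle sum vanishes: a bright stripe and a dark stripe, respectively.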
Thus, from then on no respectable scientist placed the wave theory of
light in doubt, although not everything was resolved. When it was verified
without any room for doubt that light was capable of traveling in a vacuum,
the wave theory confronted a serious challenge: it was known that wave motion
consists in the transmission of energy through a medium. There can be no water
waves without water, nor sound without air or some other medium to transmit
it; how then could light transmit itself without relying upon a physical
medium to transport it?
The proofs in favor of the wave theory of light came to be so
overwhelming that there was no other remedy than to posit the existence of a
hypothetical medium which would transport it; a medium so subtle and so rapid
that it even filled the vacuum. They called such a medium the ether, and,
although it was practically impossible to detect, no one placed its existence
in doubt; or almost no one, for, as we have previously said, Albert Einstein
did not like to accept the existence of something that could not be
experimentally verified.
In 1905 Albert Einstein, who at that time peacefully labored in the
Patent Office of Bern, Switzerland, knew very well of the work that the German
physicist Philipp Lenard had done some years before and which brought him to
discover the phenomenon that today is known as the "photoelectric effect."
Lenard found that when a monochromatic light beam of high frequency
falls upon the surface of a metal, a certain quantity of electrons
is detached from it as a consequence of the luminous energy they absorb.
Logic told Lenard that if the luminous source were brought nearer to the
metal, the electrons would absorb more energy and would therefore be torn from
the surface with greater speed. Reality, however, showed something else: upon
bringing the light source closer to the plate, more electrons detached from
the metal, but with the same velocity. The experimenter was unable to explain
this phenomenon; nor could he explain the experiment he subsequently
performed: he held the luminous source in a fixed place and now varied the
frequency of the light; he then discovered that the greater the frequency, the
greater the velocity with which the electrons emerged from the plate. The wave
theory of light, which he regarded as a continuum of energy traveling through
the ether, could not satisfactorily explain these phenomena.
Lenard's work slept the sleep of the just for several years, until the
young employee of the Bern Patent Office was able to unveil the mystery. What
if light, Einstein asked himself, behaves like the energy of radiant heat,
which, as the German physicist Max Planck had demonstrated five years earlier,
is not transmitted in continuous form but in small "packets" or quanta?
Considering light in this way, that is, as a flow of corpuscles each carrying
a quantity of energy that is, in turn, proportional to the frequency, the
enigma of the photoelectric effect was immediately resolved: when those
corpuscles of light--or photons, as they would later be called--collide with
the surface of the metal, the velocity at which the electrons are detached
from it will depend on the energy of the photons, that is, upon their
frequency, while the quantity of electrons that emerge from the plate will
depend on the quantity of corpuscles that collide with it, the latter
increasing or diminishing with the nearness or distance of the light source
from the metal. To confirm his brilliant hypothesis, Einstein demonstrated
that the energy associated with each photon is proportional to the frequency
through the same constant that Planck had discovered for his quanta.
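Einstein's resolution of the puzzle fits in one line: the maximum kinetic energy of an ejected electron is K = h·f - W, where h is Planck's constant, f the light's frequency, and W the "work function" the electron must pay to escape the metal. The sketch below is illustrative only; the work function used is a made-up value, not that of any particular metal:

```python
H = 6.626e-34  # Planck's constant, in joule-seconds

def max_kinetic_energy(frequency_hz, work_function_j):
    """Maximum kinetic energy (joules) of an ejected electron: K = h*f - W.
    Below the threshold frequency no electron is ejected at all."""
    return max(0.0, H * frequency_hz - work_function_j)

W = 3.4e-19  # hypothetical work function of the metal (about 2.1 eV)

# Raising the light's frequency raises each electron's energy...
k_low = max_kinetic_energy(6.0e14, W)
k_high = max_kinetic_energy(9.0e14, W)

# ...while bringing the lamp closer only increases the *number* of
# photons, and hence of electrons; each electron's energy is unchanged.
```

This is exactly what Lenard saw: intensity governs how many electrons emerge, frequency governs how fast they emerge.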
From then on the ether was relegated to the books of history and
curiosities of science: quite simply, no one could demonstrate its existence
because it did not exist. Again the German genius had managed to make physics
simpler.
The price he had to pay, though, was high: the explanation of the
photoelectric effect opened one of the portals to quantum theory; a theory
that, as we have mentioned, Einstein never liked, for it carries uncertainty
at its core.
In the decade of the 1920s, thanks once more to Einstein--who rescued
from oblivion the doctoral thesis of a complete unknown--Prince Louis de
Broglie (of the family whose seat, the town of Broglie, was the birthplace of
Fresnel, with whom this story began) wrote that light manifests the properties
of particles or of waves according to the phenomenon to which it is subjected,
while with matter something more occurs: material particles (such as the
electron), in certain situations and ranges, can behave like waves. But that
is a different story.
9 April 2005

15. A quantum leap
FOR THE THIRD or fourth time I have read the word quantum in one of
those speeches full of imprecision and chimerical promises which so appeal to
our president Fox. Without going further, this past Friday, in a meeting with
the country's district attorneys, which dealt, supposedly, with the prickly
matter of security, the neo-philosopher of San Cristóbal said, amidst
other nonsense, "The quantum growth of security in Mexico is about to
take off..." so as to conclude, later, referring to the indicators of
security: "However good it is, it is not enough. We must make a quantum
leap and we must do it quickly!"
Like many others, the word quantum, as well as quanta,
derived from the Latin quantum, entered into the lexicon of our speech
through science: towards the end of the 19th century the German physicist Max
Planck surprised the scientific community of his age by propounding a bold
explanation for the until then incomprehensible phenomenon of black-body
radiation. Planck postulated that the only way to explain the emission
spectrum observed from a superheated body was to suppose that the energy
which that body absorbs and emits arrives and departs in a discrete and
discontinuous form. He called each of these capsules in which energy is
transmitted a quantum, from the Latin for "how much" (plural: quanta).
Likewise, he proposed that the energy of each quantum was proportional to its
frequency through a universal constant that he named h, and which today
is known as Planck's constant. This proposition caused an enormous commotion
in theoretical physics. Until then no one had doubted the notion that energy
is transmitted in a continuous manner, like a wave in the sea or a sound
through the air, and upon this fundamental notion was structured the physics
which today we call classical. In fact, Planck himself did not view his
discovery with much enthusiasm; he was convinced that his hypothesis was
simply an approximation to reality and that, sooner rather than later, a new
theory based on the continuity of energy would explain the phenomenon of the
black body.
But it was not so. During the first decades of the 20th century, Planck's
constant occupied an important place in the mathematical formulation of
physics and quantum mechanics became the inevitable choice for understanding
the microworld of the atom.
What happens is that the value of that constant is so incredibly small
(6.62 × 10^-27 erg-seconds) that one can explain the apparent
contradiction between the continuity and discontinuity of energy: for the
dimensions to which we are accustomed, these packets of energy are so minimal
that even with the most sophisticated instruments we see their energy as a
continuous flux. Something else happens when we approach the atomic and
subatomic world; there the dimensions are small enough that the discontinuity
of energy not only becomes evident but is the only way to explain the observed
phenomena.
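The smallness of the constant is easy to quantify with a back-of-the-envelope calculation. In this sketch the lamp's power and the light's frequency are illustrative round numbers, not measured data:

```python
H = 6.626e-34  # Planck's constant, in joule-seconds

# Energy of a single quantum of green light (frequency ~5.6e14 Hz):
quantum = H * 5.6e14  # roughly 3.7e-19 joules

# A 60-watt lamp radiates 60 joules every second, i.e. this many quanta:
quanta_per_second = 60.0 / quantum

print(f"{quanta_per_second:.1e}")  # on the order of 10**20 per second
```

With some hundred quintillion grains of energy arriving every second, no instrument at our scale perceives anything but a continuous flux.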
Thus classical physics continued to rule in the world of our own
dimensions (there is no need to resort to quantum mechanics to describe, for
example, the motion of a ship or a plane), while quantum mechanics established
its reign in the world of the infinitely small.
So does Fox really know what he is referring to when he speaks of the
quantum leap that we must make against insecurity? In fact, this has
been the problem: the government of change has made innumerable
quantum leaps toward the well-being of the populace. That is,
dixit Planck, it has not advanced an iota.
(NOTE: In the Oxford dictionary there appears as one of the meanings of
quantum leap, "a sudden large increase or advance," which confirms that
our first dignitary, when he thinks, does so in English.)
30 April 2005

16. Enrico Fermi (1901-1954)
IN THE University of Chicago there is a commemorative plaque that says: "In
this place, on the 2nd of December of 1942, man produced the first self-
sustaining chain reaction and, with that, initiated the controlled use of
nuclear energy."
The plaque refers to the date in which, for the first time in history, an
atomic pile or nuclear reactor was put into operation. The director, and one
could say also the soul of the project was the Italian Enrico Fermi, perhaps
the greatest Italian scientist since Galileo.
Born in Rome the 29th of September in 1901, from a very young age he gave
indications of having an exceptional intelligence and a clear vocation for the
physico-mathematical sciences. At the age of 21 he obtained his doctorate at
the University of Pisa and continued his studies of theoretical physics in
Germany under the tutelage of Max Born. By 1930, at only age 29, he already
was a prestigious physicist renowned in the highest circles of the creators
of this science. In that very year he departed from theoretical physics (in
which he had made very important contributions, especially to the study of
gases) to dedicate himself to experimentation. That is where his genius shone
most brightly.
The discovery of the neutron in 1932 on the part of the English scientist
James Chadwick literally presented Fermi with a most powerful weapon for
studying the intimate nature of matter. The neutron, by not having an
electrical charge and thereby not being repelled by the atomic nuclei, which
always carry a positive charge, turned out to be a penetrating projectile
which was capable of striking the heart of the atom and, thereby, provoking
its disintegration.
More than 40 elements were subject to bombardment by neutrons in the
laboratory of Enrico Fermi at the University of Rome, and from that he
obtained spectacular results: he managed to synthesize various elements
beginning with others (the ancient dream of the alchemists) and even to create
the first synthetic element, which would occupy place 93 in the periodic
table, next to uranium (from which it was born), the heaviest element that
exists in nature. The pioneering work of Fermi has allowed experimental
physicists to create more than 15 new elements (indeed element number 100
bears the name fermium in honor of the Italian physicist) that exist
only because man exists.
The terrible fascism that darkened Europe during the fourth decade of the
20th century caught him in full creative work. Fermi, who absolutely did not
sympathize with the despotic regime of Mussolini (which, additionally,
seriously threatened Laura, his wife, for she was of Jewish origin), tried to
abandon Italy beginning in 1936. The authorities of that country did
everything possible to impede this exodus, and perhaps he would have stayed
there but for a fortunate intervention (earned, however, by his enormous
talent): in 1938 he was awarded the Nobel Prize in Physics. Fermi took
advantage of the trip to Sweden for the purpose of receiving the prize to
voyage from there to the United States accompanied by his family.
In his new country he immediately resumed his labors, now directed towards
obtaining a chain reaction in the nucleus of the uranium atom. As we saw, in
1942 he was able to place into motion an apparatus that generated controlled
thermal energy derived from the disintegration of the atomic nucleus. Those
works were a key piece in the complex gears of the Manhattan project, which
culminated in the sinister explosion of the atomic bomb over Hiroshima.
Fermi, who always was opposed to the military use of nuclear energy, used
his final years to fight against the proliferation of atomic weapons. Defeated
by cancer, like many other scientists who devoted themselves to the study of
radioactivity, he died November 28th, 1954. He was only 53 years old.
23 November 2004

17. "Like a Christmas tree"
THE MORNING of the 6th of August of 1945, after a sudden flash of light, so
intense that it seemed a piece of the sun, a giant cloud of smoke and dust
arose over the Japanese city of Hiroshima. The column that, against everything
that might be supposed, did not take the shape of a mushroom, rose up, twisted
and black, several miles to acquire in the end a capricious figure that
resembled a Christmas tree, as would later be recounted by one of the crew
members of the airplane that transported the singular gift that the United
States government presented to the empire of the rising sun.
With that, the vast universal history of infamy opened a new chapter; one
of the most atrocious, to be sure: 75,000 persons lost their lives as a direct
consequence of the explosion. Hundreds of thousands more would later die from
the consequences of the radioactivity freed by the uranium bomb. Three days
later, the 9th of August of 1945, other thousands of unfortunate inhabitants of
Nagasaki, a small commercial port to the southwest of Hiroshima, came to know
the enormous destructive force that plutonium too is capable of liberating, the
material of which the second bomb that fell on the islands of Japan is made.
The tragedy that these two martyred cities suffered already 60 years ago
was the culmination of a sinister process that had its roots 15 years
previously, in the laboratories of experimental physics and in the classrooms
of theoretical physics of Germany, Italy, France, England, and the United
States, and in the confluence of politics and economics. Few times in the
history of humanity has the mix of science, politics and the economy been so,
literally, explosive.
From the viewpoint of science, everything began the 17th of February of
1932 when the Englishman James Chadwick, disciple of the legendary Ernest
Rutherford, discoverer of the atomic nucleus, sent a letter to the journal
Nature in which he announced the discovery of the neutron. Until then
the students of the atom could only utilize alpha particles and protons as
projectiles to bombard the atomic nuclei and thereby provoke nuclear reactions
that resulted in the transmutation of the elements, that old dream of the
alchemists. In fact, it was Rutherford himself who succeeded, in 1919, for the
first time in history, in transforming one element into another: upon
bombarding an atom of nitrogen with alpha particles, he obtained from it an
atom of oxygen and one of hydrogen.
But the alpha particles and the protons have the disadvantage of
possessing a positive electrical charge, the same charge that the nuclei of the
atoms have. The strong electrical repulsion that particles with the same charge
manifest makes it very difficult for there to be a collision. Many alpha
particles are required, at very high velocity, to obtain a few collisions. On
the other hand, the particle discovered by Chadwick has the enormous advantage
of being electrically neutral, which notably enhanced the possibility of its
colliding with the atomic nuclei.
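The neutron's advantage can be put in numbers. A charged projectile must overcome the electrostatic potential energy U = k·q1·q2/r to reach a nucleus; a neutral one pays nothing. In this hypothetical sketch the approach distance is an illustrative figure of nuclear scale, not a measured radius:

```python
K = 8.99e9            # Coulomb constant, N*m^2/C^2
E_CHARGE = 1.602e-19  # elementary charge, in coulombs

def coulomb_barrier(z_projectile, z_nucleus, r_meters):
    """Electrostatic potential energy (joules) between charges z1*e and
    z2*e separated by r: U = k*q1*q2/r."""
    return K * (z_projectile * E_CHARGE) * (z_nucleus * E_CHARGE) / r_meters

R = 5e-15  # illustrative nuclear approach distance, ~5 femtometers

# An alpha particle (charge +2) aimed at a nitrogen nucleus (charge +7)
# must carry several MeV merely to overcome the repulsion...
alpha_barrier = coulomb_barrier(2, 7, R)

# ...while a neutron (charge 0) faces no barrier at all.
neutron_barrier = coulomb_barrier(0, 7, R)
```

This is why so many fast alpha particles yield only a few collisions, while even slow neutrons strike home.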
Armed with this new projectile, German, Italian, French, English, and
American investigators happily joined the task of bombarding myriad substances
and elements with neutrons. This frenetic activity soon yielded fruit: in 1934
a team of Italian scientists, headed by Enrico Fermi, announced the ephemeral
existence of the transuranic elements; at the beginning of 1939 the Germans
Otto Hahn and Fritz Strassmann introduced the scientific world to nuclear
fission.
Six years before, in 1933, when a year had still not elapsed since
Chadwick's discovery, Adolf Hitler was named chancellor of Germany, and the
second front was opened which led to the holocaust of Hiroshima and Nagasaki...
In 1987 the chemist Horacio García Fernández published, under
the imprints of the National Polytechnic Institute and Alhambra Mexicana, the
book entitled The bomb and its men. In it he provides a brief, yet well
documented, account of the complex process that led to the fabrication of the
atomic bomb, placing special emphasis upon the handful of men of science who
participated in the scientific and technological aspect of that process.
Among them he highlights, in my understanding, the Hungarian Leo Szilard,
a true modern Prometheus who, like the Greek giant, brought fire to mankind
and, like him, paid a high cost for his audacity; although in his case it was
not an eagle that was assigned to torment him, but instead the gang of
politicians, military and businessmen who made the worst of his predictions
come true.
While still a youth he abandoned Hungary disgusted by the dictatorship of
Béla Kun. But he chose a bad country in which to start his life again: "He
emigrated to Berlin, where he matriculated in the university and began to study
theoretical physics under the tutelage of masters such as Einstein, Von Laue
and Max Planck." In 1932, with Adolf Hitler on the verge of becoming
chancellor of Germany, Szilard, who was of Jewish descent, already understood
that in that
country there would be no place for him. He then emigrated to England and
ultimately found shelter in the United States.
Gifted with a fine intuition, as well as having a solid scientific
foundation, he may have been the first to take note of the enormous potential
that surrounded the discovery of the neutron. Long before Hahn and Strassmann
announced the phenomenon of nuclear fission to the world, Szilard had already
pondered it, and had concluded that the energy which could be liberated from
such a process would be of incalculable magnitude.
When, in 1939, nuclear fission was a reality, Szilard, now located in the
United States, tried to convince his colleagues, as much in that country as
in Europe, that they make a sort of pact of silence: the phenomena that they
were observing in the atomic nucleus should remain a secret; something like
what the Pythagoreans did 2,500 years previously, fearful that their arcane
wisdom would fall into evil minds. But no one paid attention to him; the
competition for the Nobel Prize was too attractive for keeping secrets.
Seeing that it was impossible to block the path that he knew would lead
to a terrifying bomb, he changed his strategy. He then dedicated all his
efforts to ensuring that that sinister artifact would be built in the United
States before it was built in Germany. He knew perfectly well the capability
of the physicists who had remained in that country; he also knew that the
German invasion of Norway guaranteed Hitler access to the heavy water that
abounds in that Nordic nation and which is indispensable for sustaining a
chain reaction in natural uranium; and he had no doubt that the Nazi tyranny
would do everything that might be necessary to fabricate an atomic bomb.
Leo Szilard, together with his countryman Edward Teller, who some years
later became the sadly celebrated father of the North American H-bomb, began a
campaign of persuasion among the U.S. political and military classes
concerning the feasibility of creating a weapon of war that could tilt the
balance of the conflict in their favor with a single appearance. Furthermore,
he wished to convince them that, if they did not build one first, the Germans
would indubitably end up manufacturing the bomb.
It was not a simple task: the proverbial ignorance by the politicians of
everything that is not falsified statistics or late-night lessons from
Machiavelli turned out to be a most difficult barrier to surmount. The great
academic prestige that Szilard and Teller enjoyed among their colleagues meant
nothing to the men of the Pentagon and the White House. They had, then, to seek
an unimpeachable authority to convince them. Who else but Einstein?
It is told that Leo Szilard and Edward Teller convinced Albert Einstein to
write a missive to President Roosevelt with the goal of explaining the advances
to him that had been achieved in the investigation of the atomic nucleus and
the possibility, deriving from them, of creating an explosive device of immense
potency.
Einstein, although not too sure that it would be possible to make such a
bomb, acceded and placed his signature on a letter dated the 2nd of August of
1939, though drafted almost in its totality by his colleagues. Perhaps only the
following fragment may have been written by Einstein himself, and it
illustrates very well the underestimation by the learned man of the energy that
could be unleashed by the atom:
A single bomb of this type, carried by boat and exploded in a port, might
very well destroy the whole port together with some of the surrounding
territory. Nevertheless, such bombs might very well prove to be too heavy
for transportation by air.
Today we know that the bomb could indeed be transported by plane and that
it was capable of destroying not only a port, but an entire city.
In the letter he also informed the president how scarce uranium (the raw
material for a chain reaction) was in the United States, while it
abounded in Czechoslovakia (by then under the control of Germany) and in the
Belgian Congo. He had, then, to take measures to guarantee access to the
mineral in its African source and thus to take the lead over the Germans in the
atomic race.
The letter arrived on Roosevelt's desk two months later and, contrary to
what might be thought, did not have much impact upon the spirits of the
president: he authorized a grant of 6,000 dollars for nuclear investigation and
filed away the proposition for more than two years.
Towards the end of 1941, when the United States was on the verge of
involving itself in the world conflict, the project of creating an atomic
weapon gained vitality. Einstein now had nothing to do with that; it was more a
complex interlocking between military, scientific and industrial interests that
resulted in the authorization, on the part of President Roosevelt, of the
ultra-secret Manhattan Project, whose aim was to fabricate an atomic bomb
before Germany, for which an estimated one billion dollars was budgeted.
In reality the project cost 3.5 billion dollars and involved thousands of
persons and the most powerful companies in the United States.
At last, on July 16th of 1945, when the Nazis had surrendered two months
earlier, the first atomic bomb was exploded in a desert in the southern
United States.
Szilard, frightened by the might of the weapon he had helped to create,
and convinced of its uselessness now that proud Germany was defeated, made an
even greater effort than the one he had earlier devoted to persuading the
powerful that the bomb could be made, this time to dissuade them from using
it. But Japan remained at war with the United States, and it was no secret
that Harry Truman (the new U.S. president following the death of Roosevelt)
was pondering the possibility of using the weapon against the Japanese empire.
In this case the efforts of Leo Szilard were useless: the fate of Japan
was cast. It was not, as has been argued interminably, the need to save lives
of North American soldiers, nor the urge to intimidate a victorious Soviet
Union that drove Truman to take that terrible decision. In fact the fundamental
reason why the bombs illuminated the skies of Japan during the awful summer of
1945 was the investment of 3.5 billion dollars that had been made and which,
as the market dictates, needed to be justified.
6 August 2005

18. Archimedes of Syracuse (c. 287-212 BC)
IT IS SAID that science is a human activity which is nourished by curiosity,
ingenuity and intelligence. The great Archimedes, perhaps the first physicist
in the modern sense of the word, fully embodied these attributes.
His intelligence led him during his youth to abandon the island that saw
his birth to study science in Alexandria, the mythical cradle of wisdom in
those times. With a solid store of knowledge to his credit he returned to
Syracuse to share it with his compatriots. It is said that his fame as a wise
and devilishly ingenious man began when he solved a problem posed by his
relative, King Heron of Syracuse: the sovereign had directed a goldsmith to
make a crown for him with a certain quantity of solid gold which he gave him
for
that purpose. The king wanted to know whether the artisan had deceived him,
that is, whether or not he had added copper or silver to the gold he had
received to create the diadem. But the adornment was so lovely that Heron did
not want to destroy it to resolve the enigma. How could one know if the crown
was pure gold or a mixture without putting a sample of it into the crucible? A
truly complicated problem.
Archimedes spent several days turning over the matter without finding a
response. To the delight of the historians, and of the novelists, the solution
to the riddle lit up his mind just when he found himself submerged in a
bathtub, perhaps washing or maybe relaxing his body. He observed that, as he
entered the water, a certain quantity of it overflowed the bathtub and
concluded that that amount corresponded to the volume of his own body. He then
understood that it was enough to submerge the crown in a receptacle full of
water, measure the quantity that is displaced and compare it to the amount
displaced when a gold bar which weighs the same as the crown is submerged: if
both displace the same amount of water, then they both are of gold; in the
contrary case, the sly goldsmith should tremble for his life. Happy with his
discovery, the scientist began to run naked through the palace
shouting: Eureka, eureka! (I found it!) like one possessed. Thus, the
genius of Archimedes, in addition to unveiling one of the fundamental
principles of hydrostatics, introduced the most precise and correct word to
express any discovery.
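In modern terms, Archimedes' test compares densities by way of displaced volume: volume = mass / density, so equal weights of pure gold and of a gold-silver alloy displace different amounts of water. A small sketch with textbook densities follows; the crown's mass and the fifty-fifty adulteration are invented for the example:

```python
GOLD = 19.3    # density of gold, g/cm^3
SILVER = 10.5  # density of silver, g/cm^3 (a plausible adulterant)

def displaced_volume(mass_g, density_g_cm3):
    """Water volume (cm^3) displaced by a fully submerged object."""
    return mass_g / density_g_cm3

crown_mass = 1000.0  # a hypothetical one-kilogram crown

# The reference: a bar of pure gold of the same weight.
bar_volume = displaced_volume(crown_mass, GOLD)

# A crown made half of gold and half of silver (by mass) occupies the
# sum of the two partial volumes, so its overall density is lower.
alloy_volume = 500.0 / GOLD + 500.0 / SILVER

# If alloy_volume exceeds bar_volume, the goldsmith should tremble.
```

The adulterated crown displaces roughly forty percent more water than the pure bar, a difference easily visible in an overflowing vessel.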
Yet that was only the beginning. The works of this Greek in the fields of
theoretical and practical physics and in mathematics are as extensive as they
are varied. To him we owe a methodology sufficiently precise to calculate the
value of pi, for example, as well as the use of the endless screw, the
explanation
of the principle of the lever, and the use of concave and convex mirrors.
Even though he did not hold in high esteem his abilities in practical
physics, or technology, as we call it today, the ingenious artifacts that he
designed
during the last months of his life held off, for a long time, the overwhelming
Roman legions that besieged Syracuse. The populace of the ancient Greek city
viewed with great pride the portentous talent of one of their sons who held the
brute force of the nascent empire in check.
Another legend, even more moving than the eureka one, recounts that
he met his end while completely absorbed in his scientific meditations. A Roman
legionnaire who patrolled the recently conquered Sicilian city came across an
elderly man who drew strange marks in the sand with a stick. The soldier
ordered him to get moving and leave the premises. The old man told him not to
bother him. The soldier, with that abominable arrogance inherited by the
imperial soldiers of today, cut short the life of the genius with a slice of
his sword.
They say that history repeats itself: how many wise men are losing their
lives in the streets of Baghdad?
18 January 2005

19. A Christmas gift
BY THE end of August, Hannah Ayscough now had no doubt that she was pregnant.
Three months without a period, added to the nausea and dizziness that swept
over her during the last weeks and the pain that she sometimes felt in her
breasts, which certainly were becoming harder and more swollen all the time,
could mean nothing else. She decided to tell her husband. And she would do it
that same night, when he would return home after being a month in London.
She did so during dinner. It cost her effort; it was as if she had to reveal
something sinful or improper. She began talking about the weather while they
ate their soup and, when they arrived at dessert, after having told him
everything that had happened in the house and in the town during the days she
had not seen him, she still had not dared to give him the only truly important
news. "Why is it so difficult for me to tell him?" she thought. In truth, she
knew the answer: her husband was the most methodical and orderly person she had
ever known. And it was not currently in his plans to have children.
He had said it very clearly the first night that they slept together:
We should be very attentive to your lunar cycles, Hannah. For you to
become pregnant now would be inopportune. It is first necessary to
complete the arrangements for the house in Woolsthorpe, and that is going
to take at least a year; furthermore, my position in the market is still
not sufficiently solid. To raise a child is a very serious matter, my
dear.
Thus, scrupulously attending to the dictates of nature, which he
considered infallible, he only joined his wife's body the three days before and
the three days after her period. Perhaps this passed through Hannah's mind, or
at least the phrase she chose to begin her confession so indicates: "Dear
friend, nature has failed us..."
The expression that was written on the face of her husband upon hearing
the unexpected news disconcerted Hannah: he did not show anger, nor opposition,
nor annoyance, nor even surprise, but rather there came to his eyes, though he
tried to conceal it with an affable and magnanimous smile, a profound sadness.
"Say nothing, if God has willed it thus, we shall comply with his designs. I
shall speed up the arrangements for the house as much as possible."
Two months later, during a night in October of that fateful year 1642,
Hannah understood the gesture of sadness that had shadowed the face of her
companion when he learned that they would have a child. Seeing that same face
now serene, pale, with the unmistakable aureole of mortality etched in his
pupils, the woman discovered that her husband had for a long time been
conscious that he was dying.
The impact of that unexpected demise almost cost Hannah the fruit that
she carried inside her: a copious bleeding announced the imminent miscarriage.
But that did not occur. After the crisis, the fetus continued on in the
maternal womb. The family doctor, considerably pessimistic about the woman's
chances of saving her offspring, prescribed that she remain in
bed until the delivery arrived.
It was an icy and extremely sad autumn. Hannah, with her sight almost
always fixed on the great oak which shaded the garden of the home, begged God
for the life of her child, and asked him too to make the time pass more
quickly. The supreme being paid her little heed, at least regarding the second
wish, because never in the existence of that poor woman had the days flowed
more sluggishly.
Winter arrived, laying a mantle of cold and snow upon the already frigid
autumn. Such a brutal winter had not been recorded since the days of the
Republic. It was the Christmas holidays when Hannah noted in the diary that she
had begun when she was confined to her bed: "December 23rd. Today completes,
according to my calculations, seven months. Good Lord! Still two to go. Can I
withstand them?"
On Christmas day, Hannah remained alone in the house. Her mother and her
uncle went to visit the Hardins to give them their greetings and some gifts.
With her vision set on the oak tree, which now was like a giant snowman, the
woman felt a sharp pain in her belly. It had only begun to dissipate when she
felt one even more intense. There was not a third; what there was instead was
a tiny
creature who struggled weakly in a puddle of blood. The cry the woman uttered
reached the ears of Anne, the servant, who in an instant was at the side of her
mistress, helped her to cut the boy's umbilical cord, wiped him with a moist
sponge and wrapped him in some clean sheets. Hannah contemplated the infant:
small and squalid like a rat out of water and, worst of all, hardly moving and
breathing with apparent difficulty. She did not want to see him so as not to
bond with him, for it was evident that he would not be long in dying. She asked
Anne to place him in a basket and take him to the house next door. Then she
asked her to go to the church to find a pastor to give extreme unction to
her son.
"What a sorrowful gift you have given me, Father! Could you not wait two
months so that he would be capable of surviving? Why did you want to take the
right to live from him? Have I been so wicked that I merit this punishment? Or
perhaps it is that You..." She broke off her reflection, for she knew that
she was one step from blasphemy.
The minister took two hours to arrive. "The deceased is in the house
next door, father," Hannah said as soon as she saw him. The man returned with
a small bundle in his arms. "No one is dead, Hannah, the creature is alive.
What we must do is baptize him immediately, for in truth I do not believe he
will last long among us. What would you like to name him?" Fearful of the hope
that was awakening in her chest, Hannah hesitated. No, she was not going to
name him Jonathan, as had been decided with her husband. That hope, which grew
moment by moment, and which told her that the boy not only would be saved, but
that he was going to be a great man, perhaps an immortal man if he could now
manage to survive; it was that hope that inspired her to name him after the son
of Sarah who also was born by a miracle. "He will be named Isaac, father," she
said at last. And Isaac was the name given to the gift that the Lord gave Mrs.
Newton that Christmas of 1642.
24 December 2004
20. Solitude and the scientist
THE ENGLISH physicist and chemist Henry Cavendish is the clearest example
of a solitary man devoted to his work, or, better said, to his passion.
The grandson of the duke of Devonshire on his father's side and of the
duke of Kent on his mother's, Cavendish was born in Nice on the 10th of October,
1731. He arrived in the world quite far from his ancestral lands due to the
fact that his mother was ill and the doctors had prescribed the more benign
Mediterranean climate for her. Even so, the woman died, leaving her second
offspring an orphan at two years of age.
Back in England, the aristocratic youth studied first at a school in
Hackney and later at Peterhouse College, in Cambridge, where he finished his
studies in 1753 without having obtained his title. Here is where the singular
personality of this individual begins to be revealed: despite his brilliant
intelligence and his undeniable dedication to study, he could not obtain the
degree offered by his institution simply because he was incapable
of facing his examiners in the oral examination in which he needed to
defend his thesis. Perhaps because of his hesitant way of speaking, and surely
due to his taciturn and reserved temperament, Cavendish demonstrated a timidity
that bordered on muteness. He spoke with very few people and as infrequently as
possible. Even his brother he only saw for a few minutes each year.
The only thing that connected him to the world was science. For love of
it, he was able to spend some years in Paris studying and, upon returning to
London, to participate every Tuesday in the meetings organized by the Royal
Society. The rest of the time he passed locked away in his library and
study.
At 40 years of age he inherited an ample fortune which transformed him
into one of the richest men in England. In that era, perhaps convinced that
there was little left for him to learn abroad, he shut himself in his library
and study (which was five miles from the Cavendish home) and practically did
not emerge from there until 40 years later, when his mortal remains were taken
to the Cathedral of All Saints, in Derby, on February 24th of 1810.
He never married and it is unknown whether he ever had an amorous
relation with a member of either sex. He was so unsociable that every
morning he left on the dining room table instructions for the servants about
what he wanted for dinner that day, and forbade his maids, under threat of
dismissal, from appearing in his sight while he ate.
Even so, one could not affirm that Cavendish had been a misanthrope. At
least what he did when he inherited his fortune was not misanthropic: he
constructed a physics laboratory and a library for those who shared his
vocation and talent but not his resources, and financed their work. The
Cavendish Laboratory still exists in England and has seen
various Nobel laureates pass through its premises.
Dedicated to his science like no one else I have heard of, Cavendish left a
volume of work that astonishes, as much for the breadth of the fields of
physics it covers as for the chemistry it borders upon. But the good Englishman was also
antisocial regarding glory: he deigned to publish very few of his studies and
when he did so it was because he was convinced that he had not included the
smallest error in his data. More than 50 years had to pass after his demise for
the scientific community to uncover the grandeur of this strange personage.
In 1798, Henry Cavendish accomplished one of the most ingenious and
complicated experiments known to the history of physics, which permitted him to
be the first man who could measure, with sufficient precision and certainty,
the mass of the Earth and, thereby, that of the sun, the moon, and the planets
known in his day.
How could one person enclosed in a laboratory of no more than 100 square
yards measure the mass of our planet and, even more, that of the gigantic star
whose light is responsible for the miracle of life on earth?
As we have seen, 100 years previously, a countryman of Cavendish, the
great Isaac Newton, posited the theoretical basis that inspired our hermit to
perform his brilliant experiment. In 1687 Newton published his Philosophiae
Naturalis Principia Mathematica, a monumental work that established the
principles of modern physics. One of the most important conclusions the
learned Englishman reached, after examining motion and the character of the
laws that govern it, is that bodies attract one another with a force that is
proportional to their masses and inversely proportional to the square of the
distance that separates them. He expressed this relation through the equation
F = Gm₁m₂/r², where F is the gravitational force, m₁ and m₂ the bodies' masses
and G a proportionality constant he named the universal gravitational
constant. Newton could not calculate the value of that constant, since the
force of gravitation is very weak, and measuring it requires either a very
large mass (such as that of the earth or the moon, whose masses, of course,
Newton did not know) or an incredibly sensitive instrument capable of measuring
the minuscule gravitational force produced by a smaller body.
The patient Cavendish was able to design and implement that instrument. He
suspended a rod from a fine wire at its center and at each end of that rod
placed a small lead ball. Upon applying a slight force to the balls, the rod
turned and the wire twisted.
He then brought two enormous lead spheres close to the rod and the
gravitational force between the different spheres caused the wire to twist. He
measured the torsion in the wire and with that, the force applied to it, and
also measured the mass of the spheres and the distance separating them. With
these data, he could find the constant G from Newton's equation. Upon knowing
the value of this constant, the equation could be utilized to calculate the
mass of any body. The earth, for example, according to Cavendish's
calculations, should have a mass of some 6.6 × 10²¹ tons and a density five and
a half times greater than that of water. With spheres that could be weighed in
grams and kilograms, he calculated magnitudes of billions of billions of tons.
A true portent.
Even though it is simple to describe, Cavendish's experiment is devilishly
complicated to carry out and requires such fine precision in the measurements
that it is almost inconceivable that our cloistered character accomplished it
with an instrument practically made with his own hands.
Cavendish's experiment illustrates like few others the enormous
potential of scientific thought. More than two centuries after he
performed it, the mental acuity of the man who designed and executed it still
surprises us, just as does the portentous intelligence of the man who
conceived the gravitational force.
10 September 2005
ASTRONOMY
21. Poor Mars!
IT COULD BE said that the planet Mars has been in fashion for some months now,
ever since its orbit brought it closer to Earth than it had come in many years
and thus allowed us to contemplate it in all its rosy splendor. Indeed, today
there are two robots upon its surface. One, the European Beagle-2,
unfortunately has remained mute since it arrived there, last Christmas eve. The
other robot, the American Spirit, succeeded in emerging from the shell
that protected it and has begun a hesitant traversal over the craggy surface of
that planet, at each moment sending splendid digital photographs of everything
its sharp eye considers of interest, and likewise probing with its sharp
instruments those rock samples that might hold the mysteries of life in their
crystalline structure.
Elsewhere, a European probe, the Mars Express, in Martian orbit
since the past 25th of December, also has begun to send lovely photographs of
the valleys of Mars which it manages to snap from an altitude of 175 miles
where it is found orbiting the red planet.
Ever since the telescopes of the end of the 19th century discovered some
strange formations upon the surface of Mars reminiscent of earthly canals, the
idea that that planet might harbor life, or might have had it during remote
times, has awakened the imaginations of many people, just as many more are
interested in knowing whether or not we are alone in the universe.
As for the first, the imagination, myriad men of letters have endeavored
to populate the red planet. I only refer to two, perhaps the most conspicuous:
H. G. Wells and Ray Bradbury. The vision that each of these authors has of the
hypothetical inhabitants of Mars could not be more contradictory, although at
the same time, at least from my point of view, in their synthesis they capture
what it is to be a human being.
In his War of the worlds, Wells describes some highly technocratic
Martians, aggressive, with an iron military spirit of conquest. A vision very
similar to that which an Iraqi shepherd might have had when he saw the hordes of
U.S. soldiers invading his territory.
On the other hand, the inhabitants of Mars that Bradbury describes for us
in his Martian chronicles are sweet, peaceful beings, with handsome
golden eyes that can see, in the world of the spirit, much farther than
most earthlings do, eyes such as might belong to a resident of a secluded
monastery on the slopes of the Himalayas.
And as for the second, the obsessive search for any indicator of life,
however slight, that might exist on that planet: it is ever
paradoxical that so much intelligence and so many resources are spent on that
desire, on that effort to find on Mars even a crystallized sample of a
protein or an amino acid, while here, on the old Earth, where billions of
living beings are found, we condemn a living species to extinction every day.
It is tempting to think, playing with astrology, that this Martian fashion
may be a consequence of the fact that Mars is the god of war of Olympus, given
that the somber times in which we live are, indubitably, ruled by that
ferocious deity.
To top it off, the warrior George W. Bush, the high priest of Ares the god
of war, were he to know that oil exists beneath the Martian surface, would
threaten to colonize the red planet beginning in 2030, in what would come to be
a fatal inversion of the Wellsian war of the worlds. Poor Mars!
29 January 2004
22. WIMP or MACHO?
FOR A long time astronomers suspected that the objects we can observe
in space (stars, cosmic dust and other celestial bodies such as planets,
asteroids, comets, etc.) did not represent the totality of the mass of the
universe. There must exist, they thought, an important percentage of that mass
which is constituted from some form of material that cannot be detected by a
telescope (they called it "dark matter").
At the beginning of 1993 a report was published that caused great
excitement among astrophysicists and cosmologists because it confirmed their
suspicions, although one could say it did so in excess: observable matter
represents less than ten percent of the total mass of the universe. They
arrived at this conclusion after a long and meticulous study performed using
the x-ray satellite-observatory ROSAT to observe the distribution and the
temperature of some intergalactic clouds found in the small galaxy
group known as NGC 2300. The information that this satellite received, together
with the assumption that the clouds are attracted by gravity to remain in the
vicinity of the star group, permitted Rosat's scientific team to calculate the
mass of NGC 2300. They concluded that the visible matter in the group only
represented four percent of its total mass (with an upper limit of 15
percent).
Thus, we know now that 90 percent of the universe is comprised of dark
matter; said differently: 90 percent of the mass of the universe is invisible
even though we have it practically before our noses.
At least that is what is maintained by the group of cosmologists who defend
the hypothesis, grounded in the theory of the Big Bang, that almost
all the dark matter consists of a sea of hitherto undetected elementary
particles with exotic properties and strange names (axions, magnetic monopoles,
Weakly Interacting Massive Particles or WIMPs) that fill space. All
descend from the Big Bang; the visible matter is the minimal part that,
through gravitational interactions at first, and later nuclear ones, formed
the celestial bodies.
Science requires that hypotheses be demonstrated through experimental
observation. The defenders of the hypothesis of "cold" dark matter, as these
elementary particles that surround us are called, are doing just that: in a
tunnel at the High Energy Physics Laboratory at Stanford University, a giant
germanium and silicon crystal detector has been installed, sensitive to the
ionization that is produced when an atomic nucleus collides with a WIMP or some
other dark matter particle. It will take a long time and much patience before
enough collisions are observed to confirm the hypothesis.
Another group of cosmologists holds a diametrically opposed hypothesis to
explain the abundance of dark matter in the universe. They think that almost
all of the undetected mass is found in objects resembling stars or planets
which are found in the halo that surrounds the galaxies and which, for various
reasons, do not emit sufficient light to be identified. Curiously,
these objects are called MACHOs (for their initials in English: Massive
Compact Halo Objects). Another possibility is that the mass is to be found
in extinguished stars and, of course, in black holes, objects so formidably
massive that they do not allow even light to escape from their gravitational
field.
Unlike what happens in many parts of the world, it has not
been at all easy for the scientists to find MACHOs. Taking advantage of
potent modern telescopes (those that are found on the surface of our planet,
as well as those that orbit around her), during the past decade three teams of
scientists began an intensive search for the MACHO, employing a method
suggested for the first time by the astrophysicist Bohdan Paczynski, of
Princeton University. The technique involves studying systematic variations
in the intensities of light from millions of distant stars over
several years. Its principle is that when a MACHO crosses
the line of sight to a faraway star, the gravitational field of the
dark object focuses the star's light, like a sort of lens, such that
terrestrial observers see a momentary increase in the brightness of the
star.
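Paczynski's predicted brightening can be put into a formula. For a point lens, the magnification depends only on u, the separation between the MACHO and the background star measured in units of the lens's Einstein radius; the closer the alignment, the brighter the star appears. The following sketch uses the standard point-lens magnification formula (the sample values of u are merely illustrative):

```python
import math

def magnification(u: float) -> float:
    """Point-lens microlensing magnification for a background star seen at
    impact parameter u, expressed in units of the lens's Einstein radius."""
    return (u * u + 2) / (u * math.sqrt(u * u + 4))

# Far from alignment (large u) there is no brightening; as the MACHO
# drifts across the line of sight and u shrinks, the star brightens,
# which is exactly the momentary flare the survey teams look for.
for u in (5.0, 1.0, 0.5, 0.1):
    print(f"u = {u:4.1f}  ->  magnification {magnification(u):6.2f}")
```

Because the brightening is symmetric in time and achromatic, these transient flares can be told apart from ordinary variable stars, which is what makes the search for MACHOs feasible at all.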
Since then a good number of MACHOs have been detected, yet not enough to
account for the enormous percentage of the mass of the universe that is
represented by dark matter.
The discussion, then, still has not been resolved, although everything
tends to suggest that both hypotheses are correct; that is, the universe's dark
matter very probably is comprised both of a sea of elementary particles that
uniformly swamp it (WIMPs) and of very massive and compact objects that do
not emit visible light (MACHOs).
What is fascinating in all this is to ask oneself why the scientists use
so many resources and so much grey matter to resolve the enigma of dark matter.
At first sight, it does not seem overly important to detect something that
persists in going undetected and which, in the end, has little or no
effect upon our life on the planet. Why the devil should it interest us to know
what dark matter is made of, a pragmatist could ask, apparently with reason.
That makes me suspect that the pragmatists of scientific know-how are not
as progressive and humane as their apologists suggest. In fact, if before
initiating any attempt to know or understand anything we asked questions like
the previous, it is very probable that, to paraphrase Voltaire, "we would still
walk on four feet and live in the treetops."
Any cosmologist would tell us that if the nature of the dark matter of the
universe were understood, we would be much closer to understanding how and
when it was (or was not) created, and how and when it will disappear (if it is
to disappear). Perhaps knowing this does not resolve the thousand and one
deprivations confronting human life at the dawn of the third millennium. But
certainly, in the search for those answers, and above all in the questions we
in turn ask of the universe around us, we encounter what is most profound in
human nature.
Even though the road we have chosen may be very long, it pays off in
any event: in coming to know the universe, we come to know ourselves, as
the oracle of Delphi counseled; and perhaps one day there will be "machos" only
in the halos of the galaxies, because on our planet nothing but true
human beings will be found.
6 June 2004
23. Caduceus: the wand of Hermes
IT IS TOLD that Rome often suffered from scarcity during the turbulent times
that followed the expulsion of the Tarquins. Various religious innovations
were tried then to propitiate the gods, and in 495 B.C. the Greek god Hermes
was introduced in that country, with the Italian name, Mercury. His temple on
Aventine Hill was converted into a sort of headquarters for commerce in grain
and for the marketeers (mercuriales) who traded it, although soon it
again became the object of the cult of merchants in general. His annual
festival fell on the ides of May, the day on which his temple was dedicated,
perhaps to coincide with the festivities of Maia, the mother of the god. His
statues were erected in the commercial district and the water of the sacred
fountain, near Porta Capena, was used by the merchants in a brilliant ritual
they performed on May 15th. In the Roman statues Mercury carried a caduceus, a
very rare element in Hellenic representations, yet which would come to be the
universal symbol of commerce.
Mercury, who like the furious Achilles also had winged feet, gave his name
to the smallest of the great planets, with a radius of only 1,500 miles, and is
found closest to the sun, at only 37.5 million miles from the surface of our
mother star.
But this small homonym of the god of the Roman marketeers may be the most
eccentric and enigmatic of the interior planets. It is indubitably eccentric,
for it has the most elliptical orbit of all its major brothers, with a
difference of almost 15 million miles between its perihelion and aphelion.
It is not easy to observe this planet given that, lying between the
earth and the sun, it is visible only during the day. However there is no
shortage of those who have done so, and thoroughly. Distinguished among them is
the Italian Giovanni Schiaparelli, who, after observing the planet for nine
years with an 18-inch refracting telescope (an advanced instrument
in its day), concluded in 1889 that Mercury's orbit around the sun takes
approximately the same time as its rotation on its axis (between 88 and 116
days). That is, as occurs with the moon for the earth, Mercury always presents
the same face to the sun. To be precise, 37 percent of its surface never stops
facing the sun, while an equivalent amount is never reached by its rays.
The difference in temperatures between its illuminated side and its dark
side is abysmal: more than 430 degrees Centigrade on the one, less than minus 200 on
the other. This causes the surface of the planet, especially that which is
found between the luminous and dark zones, to be perpetually vibrating,
literally crackling as a consequence of the brutal temperature changes.
Therefore, despite being much smaller than the earth, it has a unique set of
precipices on its surface that rise several thousand yards and extend hundreds
of miles.
We know of the existence of those cliffs and of innumerable craters, like
those there are on the moon and on Mars, on the rocky surface of Mercury thanks
to the images that were sent to our planet from the Mariner 10 probe on
the 29th of March, 1974. These days, 30 years later, a new space probe, the
Messenger, will be launched towards Mercury. After covering five billion
miles over seven years, it will enter orbit around the small planet. Perhaps
then many of the mysteries that surround the planet of the god of the caduceus
will be clarified.
3 August 2004
24. The war of progress
THERE IS NO doubt that the human being is a curious (in both senses of the
word) and contradictory beast. In the last 100 years, possessed by the demon of
progress, we have unleashed a merciless offensive against our mother planet,
and against ourselves. The toll of this suicidal war could not be more
sobering: many more than 100 million Homo sapiens have died at the hands
of other Homo sapiens in one or another of the countless armed conflagrations that
desolated the past century; thousands of animal and vegetable species have
disappeared as a consequence of human activity; approximately a fifth of
the planet's fertile lands have been desertified or transformed into sterile
urban blotches; the byproducts of our everyday know-how have contaminated sea,
land and air (which, moreover, we have overheated and even perforated over the
polar caps); and the uncontrolled population growth of the human species has
converted our world into a collection of asphyxiating neighborhoods.
At this pace we soon will win this absurd war and obtain the merited prize
for our determined efforts: the extinction of the human race.
Yet we said before that we are contradictory: if indeed it is true that
the cult of progress we profess has led us to worship death and
destruction, it is no less certain that, at the same time, we feel a strong
sympathy and attraction to life. In no other way can we explain the intense
efforts made by men of science to discover forms of life in the most isolated
regions, whether it be in the depths of the ocean, in the ice of the poles, in
a scalding lake in the crater of a volcano, or even beyond the reach of our own
planet.
Many of us follow with enormous interest what is occurring with the
expeditions to Mars and we share the hope that, on its desolate surface, some
form of life will be found, even if it be a simple species of bacteria. With
what reverence and care we would treat that hypothetical inhabitant of the red
planet if some day it were to fall into our hands! And we would do it, we, the
same ones who have eliminated thousands of animal and vegetable species from
the surface of the earth.
Finally, even more fascinating would be to find some form of life in the
universe similar to ourselves. Although Isaac Asimov, in a thorough essay, has
demonstrated that at least as far as current science knows it is practically
impossible that we might make contact with an intelligent extraterrestrial
being, a great number of scientists, and with powerful resources at their
disposal, are immersed in that task. Logic tells us that life cannot exist on
the stars; hence it would have to be a planet that hosts it. On the planets
near us it is improbable that intelligent life exists. One would have to
search for it, then, outside of our solar system.
Given the enormous dimensions of the universe and the relatively flimsy
instruments available to us for observing it, until only a few years ago many
scientists thought it would be impossible to observe a planet that was outside
our solar system. Nevertheless, just ten years ago, towards the end of 1994,
Alexander Wolszczan of Pennsylvania State University presented data that
confirmed the first evidence of at least two planets around the pulsar
which carries the antiseptic name PSR B1257+12. Since then more than 100 planets
have been found outside of our solar system, with one of them, discovered in
2001 and orbiting the star 47 Ursae Majoris, having dimensions and physical
characteristics very similar to those of Earth. Might there be life on it?
Will we arrive there someday? If we win the war of progress, surely not.
1 October 2005
25. Little green men
EXCEPT FOR the legendary María the Jewess (inventor of the bain-marie)
and the more earthly Marie and Irène Curie, no other woman comes quickly to my
mind when I consider names of famous scientists. It is not difficult to
explain this scarcity in light of the despotic patriarchal system that has
ruled in our world since the very origins of history. Luckily, beginning in the
second half of the 19th century the participation of women in all the realms of
human activity has multiplied exponentially. Not many years will have to pass
in order for the new histories of distinguished scientists and artists to be
much more balanced in terms of gender.
Perhaps one of the women who will appear in those histories will be the
astronomer from Northern Ireland, Jocelyn Bell Burnell. The honor she garnered
in her career is no small thing: together with her mentor, the English Tony
Hewish, she discovered, at the end of 1967, the enigmatic pulsars or neutron
stars.
In an interview published in the splendid book by Horace
Freeland Judson, The search for solutions, Jocelyn gracefully recounts
how she came to be an astronomer and, of course, how she achieved that notable
discovery.
Although from childhood she felt attracted to the study of the heavens,
when she realized that a good astronomer must be willing to spend many
sleepless nights, she felt so dispirited (for she was an incurable nocturnal
sleeper) that she was on the verge of seeking another activity. But a little
later she also learned that a new branch of astronomy was rapidly developing
which had its origin in radar and received the name of radioastronomy. The
radiotelescopes allowed emissions to be captured from celestial bodies at
wavelengths beyond the red end of the visible spectrum. Thus, to the extremely
vast universe that the optical telescopes had unveiled was added an equally
vast universe of objects "invisible" to visible light. The great
advantage of this invention, at least for Jocelyn Bell Burnell, was that the
observation of space with the radiotelescope could be performed in plain
daylight.
The quasars, those strange objects found within the universe
which emit radio waves 100 or 1,000 times more powerful than any other source,
were the first great discovery the radiotelescope permitted, back in the middle
Sixties, and were observed for the first time by Martin Ryle and Tony Hewish,
later Jocelyn's teachers.
It was precisely in a project dedicated to the study of quasars that Jocelyn
joined the team of English astronomers. Her work consisted in recording the
signals deriving from quasars that were received by a great radiotelescope that
had recently been constructed in Cambridge.
The daytime signals emitted by the quasars flickered in a similar fashion
to that of the stars in the night sky due to the solar wind. During the night,
on the other hand, with the sun on the other side of the earth, the radio
emissions of the quasars are continuous. One autumn morning in 1967, upon
arriving at work, Jocelyn found signals in the nocturnal register of the
radiotelescope that seemed to derive from a flickering source. That lacked an
explanation. Ironically, from then on, Jocelyn spent many sleepless nights
recording the strange emissions that came from a certain spot in the universe.
The signals were so precise in their flickering that at one point she and her
teacher came to suspect that they were dealing with messages sent by
intelligent beings, and so they called the source producing them LGM-1 (Little
Green Men).
A while later they discovered that the little green men were, in reality,
the most dense objects there are in the universe: neutron stars.
17 July 2005
26. Eros and NEAR
THE VARIOUS traditions of Greek mythology are not in accord as to the origin of
the unpredictable Eros. In some, he is given to us as the son of Chaos and,
thereby, brother of Uranus (the Sky) and Gaea (the Earth) who descended from
this same Chaos. This origin has its logic: so that Uranus would fall in love
with Gaea and procreate with her the giants and titans who at first populated
Olympus, the intermediation of the god of Love was necessary. If one of the
fatal arrows of the young god had not pierced the heart of Uranus (or of Gaea),
it would have little served Chaos to procreate his two offspring: the Sky and
the Earth would have passed eternity contemplating each other, and the progeny
from whose children the gods and later humans descended would never have existed.
Other traditions speak of Eros as the spoiled child of the fearsome
Aphrodite, who used him for the purpose of punishing the mortals who spurned
her by casting arrows of love at them and thereby complicating, to no end,
their existences. Remember, for example, the proud Hippolytus and the cruel
destiny to which the beautiful goddess sentenced him. Since the young man only
had eyes for the chaste Artemis and rejected the promiscuous Cypriot, she
punished him, ordering her son to cast one of his infallible arrows
at...Phaedra, Hippolytus' stepmother. The result: one of the most beautiful
tragedies of Euripides and an unforgettable lesson: with love one does not
play.
Even so, I continue to think that the first version is truer to
reality. Eros would have to be as old as the world itself if from its beginning
he was the motor that moved it.
It seems that modern science also agrees with that version. In 2001, after
12 months of orbiting asteroid 433, named Eros (which, in fact, has the shape
of a potato), the space ship NEAR (for its initials in English: Near Earth
Asteroid Rendezvous) settled gently onto the surface of the asteroid. It
was the first time that an object made by mankind had contact with one of
those space bodies. The gamma-ray spectrometer that NEAR carried on board
revealed that Eros contains little iron and aluminum compared to the great
quantity of magnesium which comprises it. A ratio of that type is only found in
the Sun or in some meteorites called chondrites, which are found to be among
the oldest existing objects in the solar system.
These observations, then, suggest that Eros was formed over four and a
half billion years ago and has changed very little since then. Furthermore,
NEAR also found that Eros does not manifest a magnetic field, as opposed to the
ferrous meteorites. The latter surely were created through the fragmentation of
celestial bodies and are most numerous in the asteroid belt.
So then, NEAR had the luck of approaching an object much more
interesting than ordinary meteorites (asteroids that have landed on the surface
of the earth) which we have known quite well for a long time. Eros is almost
as old as our sun and indubitably carries many secrets in its venerable
structure concerning the origin of the solar system, in the same way that the
young god carries in his quiver the mystery of life.
15 October 2005
CHEMISTRY
27. Of words and elements
I HAVE OFTEN been asked what relation there may be between two activities
as dissimilar as chemistry and literature. With time I have been polishing an
answer that, of course, is much more literary than scientific: just as from 29
letters it is possible to construct a virtually infinite number of words with
which, in turn, it is possible to construct, describe, transform, re-create,
and imagine the world, so from the 92 elements that exist in nature it is
possible to construct a plethora of substances that contain them. The
writer creates a world with words; the chemist, more modest, sometimes creates
substances out of elements, otherwise is content with identifying the elements
that comprise substances and showing how they are arranged in space; yet some
of those substances, be they created, like aspirin, or discovered, like
penicillin, can furthermore transform the world.
In other words, what connects the two activities could be witchcraft: a good writer performs magic with words; a good chemist, with compounds.
This analogy, which has certainly been very useful to me in interviews, has ultimately failed to convince me. I feel that what fails in it is the parallel I draw between a letter and an element. For, as much as I force myself, a letter seems to me nothing more than an absolutely impersonal sign. In an element, on the other hand, I find much more. A letter acquires weight only when it joins with others, while an element, in and of itself, has a meaning and, we might even dare say, a personality.
Sodium, for example, that unstable metal, always eager to release the solitary electron in the orbit most distant from its nucleus, I imagine as a red-headed meteor (it occupies the 11th place on the Periodic Table, and the color its flame emits is an intense orange), hyperactive and restless, able to remain calm only once it has unburdened itself of the weight on its conscience. At the other extreme, I imagine radon, that noble gas found almost at the end of the Table, as a venerable and almost ethereal being, completely divorced from earthly desires, radiating (it is radioactive) the infinite wisdom that only absolute repose can provide; I see it as an impassive Buddha in the family of the elements.
Between the two extremes there is a group of elements which, despite their names (the lovely la plata, silver, being the only element with a feminine name in Spanish), I consider undoubtedly feminine: those that form the sixth and seventh columns of the Table. Oxygen, in the very first place, comes to mind as a quick-moving (and fast-acting) beauty, always avid to receive electrons, those subtle leftovers which sodium and its gang of alkali metals so easily shed. Competing with oxygen in brilliance and beauty is chlorine, a being perhaps even more avid, yet neither as young nor as robust as her rival. Silicon, mineral and stable, even though it sits in the sixth column, I see as a sort of giant who sustains the world, or rather who constitutes it, since this simple element, in conjunction with flighty oxygen, makes up the earth we walk upon. One could also mention the friendly and loving advances of carbon, capable of uniting even with itself, with unpredictable oxygen, with odoriferous nitrogen, and with tiny hydrogen (the smallest member of the elements and at the same time the one that contributes to all) to form the myriad substances enclosing the secret of life.
The next time I am asked, I shall answer that, in truth, there is a literary world within the chemical one. I think that might be the best answer.
12 February 2005

28. The symbols for the elements
ONE COULD say that physics attained adulthood during the second half of the 17th century, with the publication of Isaac Newton's Principia. Since then, with a solid theoretical reference point and the fundamentals of a common language firmly established, that science has undergone an impressive development which has led it to investigate (and in many instances explain) everything from the world of the incredibly small to that of the unimaginably large, covering, in passing, all the phenomena that occur at our own scale.
In turn, it could be said that physics' younger sister, chemistry, reached adulthood a hundred years later, in the second half of the 18th century, with the publication of Lavoisier's Traité élémentaire de chimie. But, unlike physics, chemistry started from a theoretical reference point that was not as solid, and its language was still quite confused and archaic, for it inherited many terms and even concepts from alchemy, a beautiful, enigmatic and arcane hermetic philosophy, yet not really a science.
To illustrate this, it suffices to say that long after the publication of Lavoisier's work (1789) chemists had still not come to agreement on the names of many elements and compounds, and even less on their symbols; realistically, almost every chemist invented his own symbols to represent chemical substances. Toward the end of the 18th century there appeared a book compiling the different names and symbols that had been published in the chemical literature of the day; in it appeared, for instance, 35 different names and 20 distinct symbols for mercury.
At the beginning of the 19th century, the great English chemist John Dalton, to whom we owe, among other things, the modern formulation of the atomic theory, attempted, drawing upon his great prestige, to impose on his colleagues the symbols he had designed. They consisted of curious graphics made of circles containing points and arrows, whose composition depended on the element. The proposed system did not please his contemporaries: the language of chemistry was complicated enough without making it more tortuous with those strange symbols which, furthermore, required a sharp visual memory to manipulate. At almost the same time another chemist, with a certain name in the scientific circles of his era, proposed a set of symbols distinguished by their simplicity and clarity. One simply had to take the first letter of the Latin name of the element and write it as a capital: thus, hydrogen would have the symbol H, oxygen O and carbon C, for example. In the event that the first letter was already in use, the second would be added in lower case: calcium would be Ca and helium He.
To represent a compound, it would be enough to join the symbols of the elements that make it up: water (which has two hydrogen atoms and one of oxygen) would be H2O and ammonia, NH3.
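The symbol rule just described is mechanical enough to sketch in a few lines of code. The following little Python illustration is mine, not part of the original story, and the handful of Latin names and their order are chosen purely for demonstration:

```python
def assign_symbols(latin_names):
    """Assign symbols by the rule described above: the capitalized
    first letter of the Latin name; if that letter is already taken,
    use the first letter plus the second letter in lower case."""
    symbols = {}
    taken = set()
    for name in latin_names:
        symbol = name[0].upper()
        if symbol in taken:
            symbol = name[0].upper() + name[1].lower()
        taken.add(symbol)
        symbols[name] = symbol
    return symbols

# A few Latin element names, in an illustrative order.
names = ["hydrogenium", "oxygenium", "carboneum", "calcium", "helium"]
print(assign_symbols(names))
# hydrogenium -> H, oxygenium -> O, carboneum -> C,
# calcium -> Ca (C is taken), helium -> He (H is taken)
```

The sketch does not handle the case where both one- and two-letter forms are taken, but it captures the elegance of the scheme: a symbol can be derived from a name by rule rather than memorized.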
The man who proposed such an effective and practical method (one still in use) was a notable Swedish chemist named Jöns Jacob Berzelius (1779-1848). Contrary to what we might expect, Berzelius was not a practical person. From early youth, and to escape the poverty that assailed him, he embarked upon enterprises which at that time were considered absurd. For example, he stopped practicing medicine to dedicate himself to the business of bottling mineral water. He failed resoundingly; who, except Berzelius himself, could have gotten it into his head that bottling water would be a good business?
Different times, no doubt.
19 February 2005

29. A man of his time
IN THE year 1527, in Basel, Switzerland, the printer and humanist Johann Frobenius, who enjoyed great prestige in the Protestant intellectual circles of his era, fell gravely ill from a poorly treated leg infection. His doctors concluded that the only way to save the patient's life was to amputate the infected limb. When they were on the verge of performing the surgery, a friend of Frobenius recommended that he consult a young doctor recently settled in the city, who was beginning to be known for his excellent practice, for the bold theses he defended from his post at the University of Basel, and for the prodigious cures he had achieved using mineral substances.
The young doctor arrived at the printer's home and expelled the venerable doctors and their surgical assistants; then he examined the patient and determined that the leg was not yet gangrenous, so that it was still possible to cure it with medications. He prescribed a series of compounds that he himself prepared and ordered certain ointments, also of his own preparation, applied to the wound.
The result was astonishing: a few days after the visit of the new doctor, master Frobenius resumed his morning walks. When a beloved friend and client of the printer, the great humanist Erasmus of Rotterdam, learned of the miraculous intervention by the mysterious professor from the University of Basel, he wrote him an effusive letter, which concluded with the moving phrase: "You have saved Frobenius, who is half of my life, from the world of the shades."
Suffice it to say that the cure of master Frobenius accelerated the nascent fame of the young practitioner, especially within the circles of Protestant thinkers who in those times were waging an arduous struggle against the privilege, corruption and anachronistic thought of the hierarchy of the Catholic church.
In turn, the irreverent doctor waged his own battle against the medical practices of his day, which were based upon the teachings, taken as undeniable, of wise Greeks, Romans and Arabs who had died hundreds of years before.
Proud of his successful intervention in the case of the famous printer, and of the hundreds of letters he received from patients thanking him for curing the terrible syphilis with prodigious doses of mercury, he dared to display his attitude toward the old medical science with a bold gesture, enacted before the astonished eyes of his students and some curious onlookers in that same year of 1527: onto the bonfire that the university lit every year in front of the Basel cathedral to celebrate the end of the school year, he threw an ancient medical text written by the legendary Galen and another by Avicenna, the brilliant Persian disciple of Aristotle.
The man who performed that symbolic act, which might well mark the origin of the new science and the inevitable decline of the old, bore the impossible name of Philippus Aureolus Theophrastus Bombastus von Hohenheim. To our good fortune, he himself decided to substitute for that bombastic designation the pseudonym Paracelsus, which much better reflects his rebel spirit: Paracelsus means "superior to Celsus," and Celsus had been a Roman physician, as celebrated during the Middle Ages as he is unknown today.
Paracelsus was the last great alchemist and the first modern scientist; yet he was, above all, a man of his time: vital, innovative, violent, iconoclastic, sublime and grotesque, as was the Renaissance itself.
The son of a doctor and of the superintendent of the hospital of Einsiedeln, Switzerland, Paracelsus (c. 1490-1541) grew up and was schooled among doctors and patients. Perhaps that decided his vocation: although he engaged in many and varied activities, all were related, in one form or another, to the Hippocratic science.
He pursued the medical curriculum at the University of Basel. His critics claim that he never took his degree. However that may be, this did not prevent him from holding a chair at that same university some years after having studied there.
On leaving the university, he traveled through the south of Switzerland and the Tirol and worked for a time in the mines of that region. There he had the chance to learn in depth the art of mining, with its obvious difficulties and dangers; the nature of mineral substances; and the sicknesses that afflict miners.
In that period he also became familiar with the alchemy of the Arabs, which counted everything from mineral salts to the classical principles of sulfur and mercury among the material constituents of rocks.
Armed with this knowledge, and with the experience he had acquired concerning the products of the mines and the diseases derived from them, Paracelsus developed a peculiar pharmacopeia grounded in mineral baths, opium, mercury, lead, sulfur, arsenic and copper sulfate, and popularized the use of alcoholic tinctures and extracts. We have already seen that remedies based on such substances had a certain success in curing syphilis and that they saved master Frobenius' leg.
On his return to Basel, he gave lessons at the university on the procedures he had developed. In keeping with the spirit of the Reformation then alive in free Switzerland, he delivered his lectures in German and not in Latin, as custom dictated.
Soon his novel and irreverent ideas, aggravated by an eccentric and high-strung character, created serious conflicts with his colleagues, which obliged him, in 1528, to abandon the university and the city and begin a lengthy peregrination through many locales of central Europe, including Colmar, Nuremberg, Appenzell, Zurich, Pfäfers, Augsburg, Villach, Meran and Middelheim, among others, until in 1541 he found refuge and protection in Salzburg, thanks to an invitation from the archbishop of that area, who, although Catholic, was a devoted admirer of the Swiss doctor.
The enjoyment of that welcome did not last long: that same year, perhaps exhausted by the vicissitudes he had suffered during more than ten years of itinerant exile, his body gave out and he exhaled his last breath on the 24th of September. His enemies, who had always been abundant, attributed his demise to alcoholic congestion; his admirers maintained that Paracelsus met his end thrown down a steep embankment by emissaries of doctors and pharmacists jealous of him. Whichever it may be, he was buried in the holy ground of the church of San Sebastián in that city. There his remains rest, in the shadow of the monument erected in his honor in 1752, when sufficient time had passed to digest the life and work of this singular personage.
The ideas of Paracelsus are a strange mix of old medieval precepts with the novel discoveries of the new age. They could never free themselves from the canons of the ancient world; yet, in turn, they inspired the beginnings of modern science. It may be that, in the long run, they added nothing important to the body of medical knowledge; yet they revealed an attitude and an intention that would be fundamental for modern medicine.
In the works he wrote during his wanderings, Paracelsus shows a genuine desire to promote the progress of medicine, although his abilities were not equal to his desires. He recommended simplicity in medical practice, yet his prescriptions were extremely complicated; he exalted observation and experimentation and affirmed that the doctor personally, and not his assistants, should conduct surgical interventions, yet he refused to direct any operation except lithotomies, while at the same time proposing various new theories about the art of surgery.
Fundamentally, his system was based on a Neo-Platonic philosophical vision in which the life of man is seen as inseparable from the life of the universe. For him, the Biblical limus terrae, the dirt from which man was created, is in reality a composite of all the beings previously created. Basically it is a compound of "salt," "sulfur" and "mercury," and the separation of those mystical elements within the human body is the cause of sickness. Such a separation is due to the bad functioning of the Archaeus, an occult vital force situated in the stomach, whose function is to separate the useful from the poisonous. We have seen that, to treat illness, Paracelsus introduced mineral baths, added opium, mercury, arsenic and other mineral salts to the pharmacy, and popularized the use of alcoholic tinctures and extracts, all with the goal of helping the Archaeus to accomplish its functions adequately.
Given that the human body contains all the elements and needs them to cure its infirmities, the doctor, thought Paracelsus, should know the physical sciences and alchemy; likewise, he should know astronomy, because not only do the stars influence health, but man, like all material beings, is permeated by astral spirit. Finally, the doctor should know theology since, besides body and spirit, the human being has a third component, the soul, which was created by God and for which the spirit is a sort of body.
Around this singular character, who, as we have seen, structured a unique system based on one side upon the spirit of observation and experimentation of modern science, and on the other upon the oldest medieval superstitions, many legends have been woven. Perhaps the most notable concerns a manuscript attributed to him detailing the growth of a human embryo in the uterus of a mare: the sperm of a man was placed in the womb of the animal and a handsome fetus of almost seven months resulted which, unfortunately, died. The document describes the experiment and even accompanies it with illustrations. One might almost believe it had actually been carried out, were it not for a small detail: the maternal egg needed to form the embryo is never mentioned, simply because in the times of Paracelsus this sex cell was unknown.
Manuscripts are also attributed to him in which appear the formula and the experimental procedure leading to the philosopher's stone and the elixir of eternal youth. Probably this is because during the 17th and 18th centuries Paracelsus became a legend among the alchemists, as had happened thousands of years earlier with the "Thrice Great" Hermes, and whatever novelties the devotees of that arcane practice produced were attributed to him.
Be that as it may, it is unpleasant to think that the Swiss doctor was a great liar; we prefer to suppose he was a great dreamer.
26 February 2005

30. One must learn to dream
NOT ONLY in literature and art is the capacity to dream, to aim the imagination beyond the physical world that surrounds us, the usual engine of the creative act; in science, too, it is common for dreams to guide the dreamer's task. Here is a fine example.
Benzene, the volatile hydrocarbon discovered by Faraday in 1825 and first obtained in the laboratory from coal tar by A. W. Hofmann in 1845, can be considered one of the fundamental substances of organic chemistry. From its derivatives can be obtained compounds with uses as diverse as colorants, medicines (the celebrated aspirin is one of them) and explosives (TNT is the best known).
During the period when the peculiar characteristics of that substance were beginning to be discovered, the German chemist Friedrich August Kekulé (1829-1896) was working on the theory of valence, that is, on the way in which the atoms of two or more elements link to form a compound. His investigations had led him to conclude that carbon is tetravalent, i.e. that one atom of this element can combine with four other atoms; he also affirmed that one, two or three valences of a carbon atom can join another carbon atom and thereby form chains.
His theory made it possible to establish the structural formulas of a great number of hydrocarbons, which, in turn, allowed other researchers to obtain innumerable new compounds.
During the boom in what could be called organic chemistry, whose results in the fields of medicine, polymers (plastics) and energy revolutionized daily life in the 20th century, the condensed formula for benzene was discovered: C6H6. Somehow, six carbon atoms had to be united with six hydrogen atoms while, in accord with Kekulé's theory, carbon maintained its valence of four.
Given the apparent shortage of hydrogen, it was a true challenge to propose a structural formula for that substance while attributing a valence of four to carbon. Kekulé confronted the challenge and, after many attempts, overcame it.
He liked to recount how he arrived at the solution of the enigma. After several months of sterile efforts to capture the elusive formula, one night he had a visionary dream: six charming spider monkeys danced before him, holding hands and with their tails erect, forming a circle that rotated endlessly. At first he gave the dream no importance; yet it recurred several times, which made him suspect it contained some message.
At last, one morning in 1865, Kekulé decoded what his unconscious was telling him: the interlaced monkeys represented the carbon atoms of benzene united among themselves (taking up two valences), the tails were the unions with the hydrogen atoms (one valence), and the continual turning was the fourth valence, which the six carbon atoms alternately shared.
The formula of the benzene molecule was unveiled. Kekulé depicted it as a hexagon with, at each vertex, a carbon atom united to one of hydrogen, and with a circle inscribed in the hexagon to represent the shared valence. To this day, that is the most common way of portraying the substance.
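For the curious reader, Kekulé's accounting can be checked with trivial arithmetic: in the alternating ring, each carbon is bound to one hydrogen, to one neighbor by a single bond, and to the other neighbor by a double bond. A few lines of Python (an illustration of mine, not part of the original story) verify that this gives every carbon its four valences:

```python
# Benzene ring with alternating single and double bonds, as in
# Kekulé's solution. ring_bonds[i] is the order of the bond
# between carbon i and carbon i + 1 (cyclically).
ring_bonds = [1, 2, 1, 2, 1, 2]

n_carbons = 6
for i in range(n_carbons):
    left = ring_bonds[i - 1]   # bond to the previous carbon
    right = ring_bonds[i]      # bond to the next carbon
    hydrogen = 1               # one C-H bond per carbon
    valence = left + right + hydrogen
    print(f"carbon {i}: valence {valence}")
# Every carbon comes out with valence 4, as the theory demands.
```

The same bookkeeping fails for a ring of single bonds only (valence 3 per carbon), which is exactly the shortage of hydrogen that made C6H6 such a puzzle.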
"One must learn to dream," Kerkulé would note every time he concluded
his story. We are sure that that thought is worthy and opens more
possibilities for fruitful existence beyond multifaceted benzene.
6 March 2004

31. A mirror image
AT THE 2000 Hanover fair there was a sort of telephone booth in which one could see one's entire body reflected in a mirror. When I stepped inside and saw my figure reflected, at first I did not grasp the point of the apparatus. I looked at myself for a while, and it did not take me long to notice something strange about the image looking back at me from the other side of the mirror: it was I, in effect, but there was something odd about my reflection that made me think it was not entirely me, that what I saw was something like an imperfect double of myself. At last, on looking at the reflection of my left wrist, I observed that my watch was on it, on the left wrist of the image; that is, the image of my left arm was not in front of my left arm, as occurs in any mirror, but in front of my right arm. I then understood the nature of the device: through an optical illusion (based on some side mirrors, as I discovered later) the reflected image was not a mirror image but the true one, such as one might see in a photograph.
I left with my mind occupied for a long while, thinking how little of ourselves, physically, we really know. The image of the face and of the front of our body we generally see in a mirror, and what we see (I now became aware) is considerably different from how we actually appear. The back of our body is even more unknown to us; we rarely see it, and when we do, it is through a mirror, which means, strictly, that we do not see it as it is. Something similar occurs with our voice: the one we are accustomed to hear is different from the one heard by others, since we hear our own voice both from outside and from inside, which gives it, to our ears, a tone and timbre distinct from those of a voice that reaches us only from outside. That is why we have all had the experience of not recognizing our voice on hearing it from a recorder, and of finding it strange, as the image of myself that I saw in that apparatus in Hanover was strange.
A phenomenon that occurs in some organic substances also came to mind: optical isomerism. This has to do with substances whose molecules contain the same elements in the same amounts, yet arranged in such a way that their structures are mirror images of each other; chemists call them enantiomers. Despite being identical in everything except the spatial order of their components, such substances can have very distinct, even antagonistic, physical and chemical properties. Glucose, for example, has two such isomers: D-glucose (dextrose) and L-glucose. They are mirror images of one another, the only difference being the placement of one of their components (one to the right, the other to the left). Notwithstanding, when these molecules unite to form long chains, the D-isomers form starch, the fundamental substance for providing calories to living beings. We all know what starch is like: an amorphous white powder. The L-isomers, on the other hand, form chains of cellulose, that is, the wood that makes up part of vegetation. Cellulose (except for some bacteria and insects that are capable of digesting it) provides no nourishment to living beings.
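The idea that a mirror image can be genuinely different from its original, even when built of identical parts, can be played with in a toy model, an analogy of my own and not a real stereochemical computation. Treat the groups arranged around a center as a cyclic sequence: the arrangement is "the same as its mirror image" only if the reversed sequence can be rotated back onto the original.

```python
def rotations(seq):
    """All cyclic rotations of a sequence."""
    return [seq[i:] + seq[:i] for i in range(len(seq))]

def same_as_mirror(seq):
    """True if the reversed (mirror-image) sequence is merely a
    rotation of the original -- the 'achiral' case of the analogy."""
    return seq[::-1] in rotations(seq)

# Four distinct groups around a center: the mirror image is a
# genuinely different arrangement (the 'handed' case).
print(same_as_mirror(["a", "b", "c", "d"]))   # False

# Repeat one group and the mirror image can be rotated back
# onto the original: no handedness.
print(same_as_mirror(["a", "b", "a", "c"]))   # True
```

The same parts, in the same amounts, yet one arrangement has a double that cannot be superimposed on it; that is the heart of what the chemists mean by enantiomers.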
I cannot help thinking that if the being I see every day in the mirror were to come to life, it would be an individual very different from myself, perhaps antagonistic, and who knows what chains of relations would develop between us.
7 May 2005

32. Science and patience
IT MAY be that the most important attribute needed by anyone who decides to dedicate themselves to scientific activity is patience. The history of science is full of deeds that testify to this. Perhaps the most famous case is that of the U.S. physicist Robert A. Millikan, who labored for more than ten years and performed innumerable trials before succeeding, at last, in experimentally measuring the charge of the electron. Something similar could be said of Paul Ehrlich, discoverer of the first substance to attack syphilis chemically, Salvarsan 606, so called because of the number of substances its discoverer had tested before arriving at that arsenical preparation. Or of the team of researchers who spent long years literally plunged into the deepest mine in South Africa trying to detect neutrinos. Who can doubt that these are tasks worthy of Sisyphus.
In 1960 an enzyme called ATP synthase was isolated for the first time. From then on, scientists in many parts of the world threw themselves into the search for its structure and functioning. It was not until 1994, ten years ago, that a team at the Medical Research Council's Laboratory of Molecular Biology in Cambridge, headed by John E. Walker, after 12 years of studying the biochemistry of that substance and its crystallization, succeeded at last in decoding a key portion of the atomic structure of ATP synthase.
In addition to being a beautiful example of patience as the quality most relevant to scientific work, the discovery by Doctor Walker and his team opened unsuspected perspectives in medical science.
ATP synthase is the central molecule in the generation of energy for almost all forms of life. This protein drives (or rather catalyzes) the synthesis of adenosine triphosphate (ATP), a substance that stores chemical energy in a special bond in its structure, called a high-energy phosphate bond. When this bond is broken, or hydrolyzed, the stored energy becomes instantly available. By means of additional chemical reactions, this energy can be transformed, for example, into the energy needed to make muscle cells contract, to assemble amino acids into proteins, or to transmit signals along nerve fibers. In animals ATP is formed as nutrients are metabolized in cellular substructures called mitochondria. Plants create ATP in the chloroplasts, where photosynthesis converts solar energy into chemical energy.
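The energies involved are easy to put into rough numbers. Under standard biochemical conditions, hydrolysis of ATP's terminal phosphate bond releases about 30.5 kJ per mole, a textbook figure used here purely as an order-of-magnitude illustration (as is the often-quoted estimate that a resting human recycles roughly his own body weight in ATP each day):

```python
# Rough energy bookkeeping for ATP hydrolysis. Both numbers below
# are commonly quoted textbook values, used only for illustration.
DELTA_G_KJ_PER_MOL = 30.5    # energy released per mole of ATP hydrolyzed
AVOGADRO = 6.022e23          # molecules per mole

energy_per_molecule_j = DELTA_G_KJ_PER_MOL * 1e3 / AVOGADRO
print(f"per molecule: {energy_per_molecule_j:.2e} J")  # ~5e-20 J

# ~70 kg of ATP turned over per day, at ~507 g/mol for ATP.
moles_per_day = 70e3 / 507
energy_per_day_kj = moles_per_day * DELTA_G_KJ_PER_MOL
print(f"per day: {energy_per_day_kj:.0f} kJ")  # a few thousand kJ
```

Tiny as the per-molecule quantum is, multiplied by the ceaseless turnover of the mitochondria it adds up to the few thousand kilojoules a resting body spends in a day.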
The discovery by John E. Walker will help to answer many questions about the way living organisms produce energy. Walker also predicts that establishing the structure of this enzyme could shed new light on the molecular basis of aging: the mitochondrial genes, which direct the production of part of the ATP synthase molecule, mutate at a much greater rate than the conventional genes in the cell nucleus. Walker and other scientists suspect that these mutations accumulate over time as an organism ages. The changes affect the organism's ability to produce energy, and so may be a key factor in Parkinson's disease, Alzheimer's and other degenerative illnesses of age.
Paradoxically, time is responsible for cellular aging; but it is time itself, which the men of science spend with such generosity, that may soon overthrow old age.
3 July 2004

33. Lucy turns 50

Our outer perception is an inner dream
that occurs in harmony with outer things; and
instead of saying that an hallucination is a
false exterior perception one must say that
the outer perception is a true hallucination.
TAINE
SINCE TIME immemorial man has used substances of mineral, vegetable or animal origin to cure his ailments, to relax, to wake up, to go to sleep, to eliminate his adversaries and even to dream. The generic name for such substances is drugs. An elastic word it is, for it carries the senses of medicine as well as poison, of debt (here in Mexico) as well as lies (in Argentina), and applies equally to soft drugs (Indian hemp, for example) and to hard ones (heroin and cocaine).
The alkaloids make up the best-known family of drugs. They are so called because of the basic, or alkaline, quality of the salts formed from them; almost all are substances of vegetable origin (a few are of animal origin) characterized by producing some physiological effect on the human body.
In 1816, the German chemist Sertürner announced that he had discovered a "new organic compound, alkaline, apparently related to ammonia, with which various salts had been prepared whose physiological action was harmless." This was morphine, which Sertürner had obtained from opium, itself obtained from the fruit of the poppy. It was the first alkaloid to be isolated in a laboratory.
From then on, the 19th-century chemists dedicated themselves with true frenzy to seeking out the substances that constitute the active principles of the myriad drugs already known, and of many others discovered along the way. They also observed a curious relation between the metabolism of the plant and the place in it where the alkaloid accumulates. Thus, it was found that in perennial species it accumulates everywhere, but especially in the bark and the shell of the fruit (as with quinine and morphine); in the leaves and seeds of annual species (as with cocaine and strychnine); and in the roots of biennial plants (as with nicotine). This caused many scientists to think that alkaloids are really waste substances of no use to the plant. Others believe they fulfill a defensive function for the species.
However that may be, more than 800 alkaloids are known today thanks to the work of those men, in addition to the many others discovered or synthesized during the past century. The knowledge and use of these substances began a revolution in the health sciences comparable to the Industrial Revolution. Although the name that denotes them is today thoroughly demonized because of the boom in the use of hard alkaloids since the second half of the past century, it is fair to recognize that many human beings have led longer and healthier physical and mental lives thanks to them.
I want to recall here that one of them, a synthetic one to be sure, which became a sort of milestone in the Seventies, which opened the doors of perception and illumination for many and also those of hell for many of the confused, and which even inspired one of the Beatles' most famous songs, is about to turn 50 years old.
In 1955, the Eli Lilly & Co. laboratory announced the synthesis of
lysergic acid "from which is derived a very useful alkaloid in the treatment of
high blood pressure and migraine."
15 January 2005

34. From the "Aeolipile" to the automobile
THE FIRST thermal machine (a device capable of transforming heat into movement) is due to the talent of Hero of Alexandria (c. 130 B.C.). The Greek called his invention the "Aeolipile," and described it in detail in his book Pneumatica. It is a curious artifact consisting of a glass sphere, supported at two opposite points so that it can rotate freely, and fitted with a pair of thin, curved tubes whose orifices point in opposite directions. If a little water was placed in the sphere and it was heated from outside, steam escaped under pressure through the small tubes, making the sphere spin at great velocity.
In the same book Hero describes another apparatus that surely was also a thermal machine. He does not explain its workings in detail, but indicates that it was a device capable of making the doors of a temple altar open at the single act of lighting the burners placed at each side of the door; on extinguishing them, the doors closed. Most probably, the flame of the burners heated water in a secret compartment and the pressure of the steam caused the doors to open. When the burners were put out, the water cooled, the steam condensed, and the resulting vacuum pulled the doors closed.
These surprising inventions, which concealed in their heart the key to one of the most powerful and efficient means of transforming energy, never advanced beyond being curiosities, mere toys. Perhaps the abundant workforce provided by the slavery practiced by the Greeks and Romans made a search for alternative sources of energy unnecessary.
Almost two millennia had to transpire for attention to return to the force
of heat. Now it was an Englishman, Thomas Savery, who employed thermal energy
to drain a coal mine. His machine, which in 1698 earned him a prize offered by
the mine owners, functioned in a manner very similar to the artifact Hero used to open the altar doors: steam was simply produced from water and piped into a large vessel; when the vessel was full of steam (that is, when the steam had displaced the air in the container), it was cooled, so that the condensing steam created a vacuum inside. It was then enough to open the valve on the pipes connecting the vessel to the water flooding the mine for that water to be sucked up into the receptacle by the vacuum.
Savery's device was notably efficient as compared to the habitual measures
then employed to drain the mines, yet even so it consumed enormous quantities
of fuel.
A countryman and namesake of Savery, Thomas Newcomen, perfected the invention by using the pressure of the steam to move a piston (a device that had, in turn, recently been invented by Denis Papin). In 1705, exactly three centuries ago, Newcomen announced his new invention: the steam engine. Perfected 50 years later by James Watt, this artifact became responsible for the Industrial Revolution and its direct consequence: the new economic, political and social order that would be established, first in Europe and then in the rest of the world, at the outset of the 19th century, and which we are still developing.
Paradoxes of science: an invention that was conceived to rationalize the
use of energy has exhausted, in only two centuries, much more fuel than mankind
has consumed since its origins as a species. And it is running out.
9 April 2005

BIOLOGY AND EVOLUTION

35. Why do women have breasts?
I REMEMBER once reading a commentary by Bertrand Russell on the obsessive precision of men of science compared to the complacent vagueness of political discourse. He said, approximately:
Whereas a scientist, in reporting the results of some research,
accompanies the collected data with rigorous specifications of its
margins of error, politicians document the supposed achievements of
their actions, or announce the figures for their future enterprises,
in round, absolute, unquestionable and, of course, almost always
wrong numbers.
But it is not of this that I wanted to speak; rather of the first part, that is, the obsessive precision of men of science. I happen to be translating a splendid book by Paul R. Ehrlich, a well-known evolutionary biologist at Stanford University, titled Human Natures. As the title suggests, Ehrlich does not believe there is a human nature, unique and universal, to which we can always appeal to justify our defects or praise our virtues. In his extensive book, Ehrlich attempts to decode the entire interaction between biological and cultural evolution that has given rise to modern human natures, with all their richness and all their misery. An ambitious project, who could doubt it, which the author undertakes with a rare mixture of erudition, genius and humor, with the result that the task of translating it has turned into a pleasure.
And it is in this pleasant task that I have been struck by the scientist's moving obsession with being precise, with never affirming rashly, always prefacing sentences that refer to an observation, hypothesis or theory not fully demonstrated with words such as "apparently," "presumably," "probably," and so on. It likewise surprised me to learn how many things science still does not know. In fact, this spacious compendium of human evolution that is Ehrlich's book is also a summa of conjectures and speculations that try to shed some light on many facts that are known but not explained.
For example, it is known that women are the only primate animals who, when
they are sexually adult, have permanently distended mammaries; it is also known
that this occurs because some of their 100,000 genes activate a process that
stimulates the development of the tissues which comprise the breasts; and last,
it is known that that gene is dominant because, over the duration of some
200,000 generations, the females who carried it had more opportunities to
survive and reproduce. What is absolutely unknown is why this selection occurred; that is, why the full-breasted females among our hominid ancestors had a better chance of surviving. The first explanation that comes to mind, that the size of the breasts is proportional to the quantity of maternal milk produced, cannot be sustained: there is no connection between them. The second, apparently very plausible, is that developed
mammaries were very conspicuous charms of sexual attraction in individuals who
walked upright. Thus, the females with more developed breasts were more
attractive to the males and, thereby, had more opportunities to reproduce
themselves. Unfortunately, what occurs in modern populations contradicts this hypothesis: prominent breasts are sexually attractive only in societies, such as ours, where it is customary to cover them. Among populations accustomed to going about with the torso uncovered, the sight of breasts provokes no particular emotion in the males. Ehrlich cites a third hypothesis, ingenious yet one that
seems implausible: the prominent chest was a comfortable pillow for the
offspring whom the women gatherers almost always carried with them. In this
way, the women with better pillows had more probability of nurturing their
offspring and, with that, of transmitting their hereditary characteristics.
The fact is that no scientist knows, clearly and unambiguously, the reason for the existence of those marvelous feminine appendages (I remain convinced that they are there so that Rubens and several others could paint them); however, and this is an example many politicians should follow, men of science have no problem admitting their ignorance.
29 August 2003

36. Ah! Love

I
NATURE HAS deployed an infinity of methods and techniques so that living beings can fulfill the most fundamental of their functions: reproduction. These range from the simple chemical stimulus that causes a bacterium to split in two, through the contact at a distance between two plants by means of an insect that carries pollen produced in the anther of one to the stigma of the flower of the other, to the enamored youth who writes lyrical poems to his beloved in the hope that she will open her heart to him and finally allow him, in a prolonged embrace, to deposit his sperm in the depths of her uterus.
The mystery of reproduction has always fascinated me. It may be because it
is there, at least for me, that the genesis is of the most universal of human
sentiments, of the concept to which the most words have been dedicated in the
literature of all times and all countries, that which has generated the most
art and music and which has sustained many religions: that of love.
In the case of the animals, evolution has perfected an instrument, to give
it a name, that guarantees encounters between two beings of the same species
but different sexes with the purpose of perpetuation: desire. I was on the verge of saying that this last word is too human to account for a phenomenon so universal in nature, but in the end I told myself: and what word is not too human? I shall call desire, then, the evolutionary instrument that leads creatures of the same species but of different sexes to pair off with the goal of reproducing.
So then, this desire can manifest itself in two forms; one of them more
primitive than the other, or if you like, less evolved, and therefore
corresponding to less complex species in biological terms: the desire to
reproduce versus the desire for the other.
Consider the case of the arachnids. Indubitably it is not desire for his
companion that leads a male scorpion or spider to couple. It is well known that
the arachnids are one of the most ferocious and perpetually hungry groups in
the animal kingdom. When a scorpion approaches a female of his species to sow his sperm in her, he literally risks his life in the attempt. He must employ all his cunning and cleverness (females are in general larger and stronger than males) to take his counterpart by surprise and suddenly clasp her claws with his. Thus connected, with weapons neutralized, the pair begins to dance. They move forward and back with their tails raised, sometimes intertwined. After having swept the ground clear with their dancing, the male extracts a packet of sperm from beneath his thorax and deposits it on the ground. Still holding the female by the pincers, he shakes her and pulls her forward until her sexual aperture is dragged directly over the packet of sperm. She takes it up, and now comes the most difficult part for the unhappy male: to let go and depart quickly before his lady lunches on him. Sometimes he does not manage it and ends his days in the belly of his mate. Yet in terms of the success of the species as such, these disasters are of small consequence: he loses his life after, not before, having completed his task.
Other species, such as our own, manifest not a desire to reproduce but rather a desire to couple with a member of the opposite sex. Although at times we may be disposed to risk our lives to obtain the other's favors, however much it may upset the venerable hierarchy of the Catholic Church, it is not the desire to have children that impels us to do so. What we really desire is the other person. Put another way, what we desire in reality is to pair off, not to reproduce. Might love begin here?
4 September 2004

II
The birds, those magnificent beings, have managed to defeat implacable gravitation and transport themselves to the heights of the heavens without ever suffering the vertigo and the permanent fear of falling to earth that we humans undergo when we travel by air, we, a species condemned by its evolution to remain at ground level, though our ingenuity has permitted us, up to a certain point, to emulate the feathered bipeds. The birds, it has been said, bear certain similarities to ourselves with regard to their sexual habits and cohabitation.
Many species of birds do not seek a member of the opposite sex in order to consume it, or to deposit their sperm quickly and fearfully so that fecundation occurs and then abandon it. Instead they search, like us, for a mate with whom to live, like us to incubate the products of the encounter, to raise and educate the young when they leave the shell and to let them go when they reach maturity, and then to begin, permanently together, a new cycle. Many species of birds, then, like ourselves, form families. Although we shall see below that not all are so monogamous, given that the males of certain species are as promiscuous as a sultan (with which comparison it becomes clear that in this too they resemble us).
What all share, monogamous or promiscuous, and which they also share with
us, is the attraction for the other. They have many means of attracting the
opposite sex. The smallest birds, timid and fragile, those that are an easy
feast for the predators, and who therefore live almost always hidden in the
thicket of the foliage, utilize song to attract their mate. Few animals manage
to emit such beautiful and harmonious sounds as do these minuscule birds.
Perhaps one would have to seek in music another of the arcane origins of love.
It is difficult, for example, to hear a good soprano or good tenor sing an aria
of Mozart or a Schumann cantata and not feel something resembling a profound
amorous yearning.
The largest and most powerful species, which do not need to hide to survive, are more likely to use their physique to attract females; now it is not the song of the peacock, harsh and monotonous, that attracts the peahen but his splendid plumage, an astonishing ornament that may weigh more than the rest of the bird and which renders him almost useless although, to be sure, absolute master of a harem of starved females. The more decorated the male of a species, the more promiscuous that species, and the more distant from the idyllic idea of family life that we hold of the birds.
At the other extreme, there are birds that are strictly monogamous and never separate once they have met. In these cases it is neither the plumage nor the song that brings the pair together. Strictly speaking, it is not known what attracts them, for they are so alike that not even they themselves can identify the opposite sex. So it is with the penguins, whose elegant dress coat has the same cut and color in females and males alike. Thus, when one of these charming animals searches for a mate, he takes a small stone in his beak, plants himself in front of another who is alone and solemnly deposits it at the other's feet. If he receives a pecking, he knows he has committed a terrible error: he has approached another male. If his offering is received with indifference, then he has met a female who is not ready to pair or who is already paired. He retrieves his underappreciated gift and departs. But if the stranger receives the present with a deep bow, then he has found his true mate. The bow is returned, and the two stretch out their necks and trumpet a nuptial chorus in celebration. Isn't it true that they seem like us?
11 September 2004

III
Perhaps the explanation of how love originated and how humans understand it is
not in the characteristics of sexual habits and cohabitation that we share with
other animals but instead in those which are unique to our species. I refer to
three.
The biologist Paul R. Ehrlich asserts that woman is the only female mammal
who is receptive every day of the year. As opposed to their cousins the
chimpanzees, the gorillas and the orangutans, who manifest sexual urgency
during short periods and a few times per year, precisely during those times
when their eggs have matured to be fertilized, our women are capable of
receiving the embrace of the male at any moment. And we males, for our part, are disposed to join them at the slightest provocation. Perhaps that is why, as Ehrlich himself notes, sex is, for many, the theme to which we dedicate the most time in our thoughts. Not a day passes in our lives, from puberty to old age, it is said, without our thinking about sex several, not to say many, times. I think the claim is justified; at least in my case, which is the only one I can report, it is fully borne out.
Yet it was not always so. It is very probable that the females of our
hominid ancestors only were receptive during ovulation (in fact that continues
to be the period during which the majority of females feel the most intense
sexual appetite) as occurs with the rest of the primates.
What was it that led the females of our species to prolong their receptivity on the one hand, and to hide it on the other? For one must add that women, unlike the other mammals, do not make their fertile period conspicuous (in general it is odor that alerts the males that the female is ready and, in the case of the primates, besides the odor, the females display a beckoning and colorful swelling of the sexual organs during this period). Biologists think that it is precisely because this state is not obvious in the woman that men are disposed to couple with them on any day of the year. The question persists, however; strictly speaking, we do not know the answer. The only thing evolution can tell us is that the female hominids who were less conspicuous when in heat had more opportunities to survive than the others, such that, after some 25,000 generations, the traces of those attributes that attracted the males' attention disappeared.
It is not possible, then, to know the origin of this change, but we
certainly can draw interesting conclusions from its results. I warn that what
follows are my own lucubrations, and very well could be a string of
foolishness, yet they do not cease being an attempt to decipher the amorous
enigma, which interests me almost as much as sexual desire itself. With the female receptive every day of the year, the capacity to care for one's partner (and vice versa) became accentuated in our species as in no other. The obsession with living as a couple that fills so large a part of our thoughts sharpened, so to speak, our capacity to desire the other, then to care for the other and, finally, to love the other.
A second sexual characteristic unique to our species reinforces the thesis broached above. According to J. Bronowski's testimony, and I have no reason to doubt it, humans are the only mammals who make love face to face and not, as the rest of those animals do, by penetrating the female from behind. Furthermore, Bronowski affirms, although here I have my doubts, that the woman is the only female in the animal kingdom who experiences orgasm...
18 September 2004

IV

Drinking without thirst and making love anytime, ma'am,
is all that distinguishes us from the other animals.
BEAUMARCHAIS
We said that human beings are the only mammals that perform the sexual act face to face. The face is the part of our body that gives us an identity; so much so that when someone wants to commit a misdeed, or a heroic act, it is enough to cover the face so as not to be discovered: behind a mask, we are all more or less equal. But the face reveals much more than a person's identity. The features of an individual, if we know how to read them, tell us a great deal about him; in the face is revealed, in addition to the physical identity, the spiritual identity of its owner.
It may be for this reason that it is a person's face that awakens an amorous sentiment towards her, and the more intense that sentiment, the more intently we gaze at the face, the more urgency we feel to caress it and, above all, to kiss it. The shared kiss is the culmination and at the same time the beginning of an amorous relation.
Thus the importance of human beings performing the sexual act face to
face. Was it an amorous sentiment that led our ancestors to change the posture the other mammals use when they copulate? In the latter posture, the only things the male can see of his partner are her back, her shoulders and her head, while the female cannot see even that: her gaze points in the direction opposite to the one who enters her; hers is a purely receptive act, and she cannot contemplate the scene of their union. One might think that it was the females who turned their bodies to receive their lover from the front, to be able to see his face, to allow the kiss on the lips at the very instant of being possessed, so converting what was merely an animal act into an amorous rite. The simultaneity of the kiss and of copulation (possession of the spirit,
possession of the body) is one of the most lovely and enigmatic habits of human
beings, and a good part of the mystery of love, I think, is contained in that
rite.
The third wholly human characteristic that can shed a certain light on the amorous phenomenon is the fact that our offspring depend on their parents, especially the mother, for a long period in order to survive. Indubitably, the origin of the amorous sentiment we consider most pure and sublime lies in those long years when the child depends completely on his mother: she who is willing to give everything, even her life, without receiving anything in return.
Our best sentiments rest on the enormous force of maternal love, and it is on her example that the principal religions of the world are founded. We admire and revere a capacity to love that we know is ours and yet is so difficult to attain that we literally sanctify those who are capable of fully realizing it.
Because, unfortunately, if our nature gave us the capacity to be loving, it also allowed us to be egoistic, fatuous and envious (one would have to ask from which human characteristics those sentiments derive); but above all, we received the capacity to experience a feeling as intense as love and equally limited to our species: hatred.
Are the same characteristics that allowed us to glimpse the sublime those that made us able to kill our fellows, even at the cost of our own lives? In other words, are we so good at hating because our ancestors had continuous periods of receptivity, copulated face to face and dedicated long years to the upbringing of their offspring? It is difficult to say.
25 September 2003

37. The man from Flores
THE ISLE OF FLORES, in Indonesia, is a very peculiar ecosystem. Separated for more than two million years from the Asian continent, the species that live on it have been able to evolve without external species intervening in the process. Something like the Galapagos Islands.
And as in the Galapagos, on Flores Island there are giant lizards. Those animals could grow to their dimensions because there was abundant food for them and no species existed to prey on them. There was also a dwarf elephant there, called Stegodon. This animal, now extinct, had no natural enemies either, which is why it did not need to attain gigantic dimensions like its cousins in Asia and Africa (size is a form of protection in some species), allowing it to economize on food.
The inhabitants of this island are descendants of the homo sapiens who began to populate Polynesia 45,000 years ago, and who in turn descended, like all of us, from the homo sapiens who left the African continent about 60,000 years ago. These aborigines, as in all human cultures, preserve very ancient myths and legends. Among them is one that refers to some little men who populated the isle before them, whom they called the Ebu Gogo, meaning "grandmother who eats everything."
The tales of the astonishing deeds of the Ebu Gogo had always seemed just one more of the endless legends the human imagination has produced, until, only a couple of months ago, a group of Australian and Indonesian researchers announced the discovery of homo floresiensis, the man of Flores. This fascinating creature, our distant cousin, only a yard high and with a brain scarcely larger than a chimpanzee's, lived on the island for at least the last 90,000 years, and as recently as 12,000 years ago left signs (in the tools and utensils that have been found) of possessing a well-developed intelligence.
This discovery, as spectacular as it was unexpected, has suggested many
more questions than answers to the scientists. How did that fragile little
person get there? How could they display so advanced an intelligence with such
a small brain? How were their dealings with homo sapiens, with whom they
no doubt lived? Why did they disappear? Perhaps the answer to those questions is found, ultimately, in evolution itself. We know that almost two million years ago a group of hominids, homo erectus, left their African birthplace to populate the Asian and European continents (the famous Peking man is one of their descendants). These individuals, gifted with an intelligence that
enabled them to fashion tools and primitive boats, arrived on Flores Island
about 800,000 years ago, as evidenced by the remains of the most ancient
artifacts found in the area. Might they be the predecessors of the homo
floresiensis? At first sight it appears not: homo erectus was much
larger than the Flores people and his brain size much greater. But if it is
taken into account that the evolution of species responds to the pressures of
its environment, then that possibility is not so remote. Perhaps, as happened with the dwarf elephants, homo erectus, settled on Flores Island and having no natural enemies, did not need large dimensions but rather a small body that would permit it to economize on nourishment. Eight hundred thousand years is a reasonable span for evolution to accomplish such a work; that is, for homo erectus on Flores Island to shrink to two thirds of its original size. Yet why did they disappear, if indeed they have disappeared?
4 December 2004

38. Compulsive communicators

The limits of my language mean the limits of
my world.
LUDWIG WITTGENSTEIN
One of the many questions posed by the discovery of the man of Flores, or homo floresiensis, is whether those close cousins of ours had developed a language similar to that of homo sapiens or whether they had only managed to construct a protolanguage (the archaic form of communication, based on grunts, gestures and bodily signals, which the first hominids seem to have developed).
Language is so tightly connected with other unique elements involved in
communication among the societies of homo sapiens, especially religion
and art, that some scholars believe it may have evolved some 50,000 years ago,
in time to be responsible for the great revolution in tool technology that
occurred in that era and which the scholars call the "great leap forward." In
opposition, many other scientists suspect that language evolved over a very
long period, and that some of its most ancient roots perhaps can stretch back
into the history of non-human animals, including birds and frogs, in which the
left hemisphere of the brain is, as with human beings, more involved in
vocalization. It seems reasonable to suppose that, in essence, a continuum
exists between the verbal communications of our simian ancestors
(communications which may resemble those we find even in today's vervet
monkeys, with a relatively small brain, yet who are capable of distinguishing
different predators and utilizing distinct alarm calls for raptor birds,
serpents and leopards) through to completely modern language.
It is evident that if one could demonstrate that the man of Flores possessed the faculty of speech, the second hypothesis would gain in strength, given that homo floresiensis had not achieved, so far as is known, a technological revolution on the scale of the "great leap forward"; this would confirm that such a revolution was, in a certain sense, the result of the ability of homo sapiens to communicate verbally, and not the reverse, as the defenders of the idea that language resulted from that revolution maintain.
Perhaps the key to deciphering this enigma lies in the cranium of those beings: if the skulls of a chimpanzee and an adult human are compared, it will be seen that in the chimp the tongue lies completely within the mouth, while in the human being the rear part of the tongue forms the front wall of the upper vocal tract, giving it the flexibility that allows speech. The high position of the chimpanzee's larynx allows it to move upwards and engage the nasal cavity during breathing, so that air can pass to the lungs without obstruction while food traverses the throat on either side. The lower position of the larynx in the adult human means that air and food travel a common course behind the tongue, increasing the risk of choking. In newborn human beings the structure is arranged as in the chimpanzees, so that babies can breathe and feed simultaneously without danger. At about a year and a half of age, when they have already learned to eat and to breathe, the larynx descends in order to make speech possible.
A similar comparison between the cranium of Flores man and that of homo sapiens (although it would not be easy to accomplish, for one would have to seek that descent of the larynx through the traces it may have left in the anatomy of the former's cranium) would shed much light on the subject and show to what point our little relatives were, in the words of the great naturalist David Attenborough, compulsive communicators, as we ourselves are.
18 December 2004

39. The reptilian or R-complex brain
ON THE SUBJECT of the resurgence of the P.R.I. dinosaur, a book comes to mind that I read many years ago now and which impressed me profoundly: The Dragons of Eden, written by the great popularizer of science Carl Sagan, a book that deservedly earned him the 1978 Pulitzer Prize.
The work offers an intelligent panorama of the evolution of the human brain, a process that began around 500 million years ago with a small protuberance that developed at the front end of the spinal cord of the fishes of that period. Natural selection and the passage of the years were responsible for that small swelling growing in size and complexity until it finally became our resplendent brain, indubitably the most elaborate, intricate and enigmatic machinery that nature has created.
Now then, the theory of evolution tells us that nature is much more thrifty and efficient than the neoliberal governments, since it never discards what took millions of years to create; that is, each new evolutionary stage of an organism contains and includes the previous one, rather as occurs with the layers of an onion. This means that the small protuberance of the fish of the Ordovician period still exists in the deepest recess of our brain. As do, of course, all the subsequent evolutionary stages, up to the neocortex or neopallium (the famous grey matter), a stage fully developed in the mammals and which in human beings is more evolved than in any other species.
On this point, Sagan cites the work of a researcher named Paul MacLean:
MacLean has developed a captivating model of brain structure and evolution
that he calls the triune brain. "We are obliged," he says, "to look at
ourselves and the world through the eyes of three quite different
mentalities..." The human brain "amounts to three interconnected
biological computers," each with "its own special intelligence, its own
subjectivity, its own sense of time and space, its own memory, motor, and
other functions"*.
These three components of the brain MacLean calls the "reptilian complex"
or R-complex, which is, it is supposed, the most primitive, the "limbic system"
and the "neocortex" or cerebral cortex. Further on, Sagan asserts, "If the
thesis that we have evolved is correct, one can suppose that in a certain sense
the R-complex continues to serve the same functions in the human brain that
it served in the dinosaur, and that the limbic cortex generates the
stereotypes of the lions and the lambs" (the italics are mine). Lastly, he
extracts from all that a satisfying conclusion:
MacLean has demonstrated that the R-complex plays an important role in
aggressive behavior, territoriality, ritual and the establishment of
social hierarchies. Despite occasional welcome exceptions, this seems to
me to characterize a great deal of human bureaucratic and political
behavior... It is striking how much of our actual behavior--as
distinguished from what we say and think about it--can be described in
reptilian terms.
With due respect to the reptiles, Lord knows there exist examples of the
latter in our abject political class! Although, if one must be fair, the
examples extend to the political classes throughout the planet: have you
noticed, alert reader, the horrific visage of baby Bush? It is
impossible not to think of a lizard when seeing him.
* The citations are taken from Carl Sagan, "The Dragons of Eden",
Random House, New York, 1977, chap. 3.
2 October 2003

40. The telomeres and cancer
THE MAIN difference between bacteria and higher organisms lies in the way
their DNA is arranged in their genetic legacy, or genome. While the former
in general maintain their genome as circular molecules, animals and
plants arrange their nuclear genome as a set of linear molecules called
chromosomes. Although a linear architecture has its advantages, it also
presents problems; perhaps the most notable is what to do with the ends.
This problem has at least two faces: on one side, the free ends of the DNA
molecules are notoriously unstable; they degrade chemically and undergo
recombinations with much more frequency than the inner parts. On the other
side, the enzyme responsible for replicating the nuclear genome during the
proliferation of the cell has trouble copying the ends of the DNA molecules
exactly, such that, without special care, the ends of the molecular
sequence tend to be lost in the copies. To attack those problems, cells
cap the ends of their chromosomes with special structures called
telomeres, complexes of repetitive DNA and associated proteins.
Studies performed in the last ten years (the first telomeric DNA from a
human cell was isolated in 1995) have provided ever more evidence that
telomeres and the enzyme or enzymes that produce or maintain them play a
fundamental role in cellular aging and in the immortalization of cells, which
very often is associated with cancer.
It appears that, with time, the normal cells of the human body exhaust the
telomeric "caps" that protect the ends of their chromosomes. When they are
completely exhausted, the cell dies. Experiments have been performed with cells
cultivated from the tissues of young persons and old, and it has been shown
that the former live much longer than the latter; similarly, it has been
verified that the telomeric structures covering the ends of the chromosomes of
those cells were considerably longer in the case of young persons. Telomerase,
the enzyme that forms the telomeres, exists in reasonable quantities when the
cell is young. As time passes, its activity diminishes until it completely
disappears.
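The telomeric "clock" described in this passage can be sketched numerically. The figures below (initial telomere length in base pairs, base pairs lost per division) are illustrative assumptions of my own, not data from the text:

```python
# Illustrative sketch: without telomerase, a cell's telomeres shorten with
# each division; when the cap is used up, the cell stops dividing and dies.
# Both numbers below are assumed round figures, chosen only for illustration.

TELOMERE_LENGTH_BP = 10_000   # assumed initial telomere length (base pairs)
LOSS_PER_DIVISION_BP = 100    # assumed loss per division without telomerase

def divisions_until_senescence(initial_bp, loss_bp):
    """Count divisions until the telomeric cap is exhausted."""
    divisions = 0
    remaining = initial_bp
    while remaining > 0:
        remaining -= loss_bp
        divisions += 1
    return divisions

print(divisions_until_senescence(TELOMERE_LENGTH_BP, LOSS_PER_DIVISION_BP))
# 100 divisions with these assumed numbers
```

With telomerase active, the loss per division would be offset and the count would never run out, which is the sense in which germ-line and cancerous cells are "immortal."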
But in our body there exist other cells that we may consider "immortal,"
for once formed they never cease to reproduce, as occurs, for example, with
the sexual cells or gametes. These cells always produce telomerase to keep
intact the telomeres that cap the ends of their chromosomes.
When, whether by the action of a viral agent or of a mutation, a normal
cell begins to replicate itself uncontrollably creating a cancerous tumor, it
begins also to produce large quantities of the enzyme telomerase, which
facilitates the proliferation of the diseased cell. If the production of a
pharmaceutical capable of inhibiting this enzyme were achieved, then the
cancerous cell would cease reproducing and would die. Nothing would happen to
the healthy cells, for they are accustomed to functioning with little or none
of that enzyme ever since adulthood.
So then, whoever discovers or produces that substance will surely be awarded
the Nobel Prize, along with the satisfaction of having cured many, many people.
22 October 2004

41. Of sequoias and communication

Who upbraids the tree when its fruit falls into the mire?
HÖLDERLIN
ONCE a friend of mine asked me what I should like to be in the event of being
reincarnated. Without thinking much, I answered a tree, specifically a sequoia.
And I was serious about it. I cannot conceive of a more beautiful and imposing
living being than that plant. It is the biggest organism existent upon the
planet (easily doubling the size and tripling the weight of a blue whale), one
of the most long-lived and, best of all, is absolutely independent: it does not
need to draw on another living being to subsist. The air, the sunlight and the
water and the nutrients that it absorbs from the ground suffice for it to grow
and remain living.
When he heard my reply, my friend seemed surprised. "A tree?" he said.
"That seems Buddhist. Trees are static, insensible beings, incapable of
communicating. They are as far from a human as one can get." I agreed
with him that trees are not exactly a form of life very close to
our own, yet they seem to me anything but static. It is enough to observe
them on a windy day to become captivated by the capricious movements of their
leaves and branches. And not only when there is wind do they move. They are in
continuous movement; it is just that the speed at which they move is so
slow that we do not manage to perceive it. Anyone who has seen a time-lapse
film of a plant will understand what I mean. My
friend understood it, yet insisted: "Alright, they move, but they do not
travel. If they could see, they would always see the same." That too is not
totally true, especially in the case of the sequoias. Even though they cannot
travel, their life is so lengthy that, literally, what moves is the scenery
where they live. After a thousand years or more, they doubtless would have seen
many things, in the event that they could see.
"The scenery they have before them moves, alright," my friend insisted,
"but it does them no good, for they cannot see or feel it." "Of the latter do
not be so sure," I answered, "for I think it is almost necessary that plants
feel the world in some manner; if not, they would have neither evolved nor
survived." "At least concede that they are not capable of communicating." On
this point I had to concede.
I just read an article that reminded me of the scene related
above. It turns out that about five years ago it was discovered that plants
are capable of sending signals for help.
It concerns an ingenious defense system for protecting themselves from
the insects that consume them: synthesizing and secreting large quantities of
volatile substances that attract the enemies (whether parasites or
predators) of their enemies, that is, of the insects that eat them.
Furthermore, plants can distinguish herbivore attack from simple
mechanical damage inflicted on the leaves, and can even distinguish an
herbivorous insect from another herbivorous animal, for example a mammal,
which can have a beneficial effect because it spreads their seeds.
The substances released into the air are called elicitors, and they serve
as much to attract benign herbivores as to attract the enemies of the plants'
predators. Their message can cover great distances. Plants, then, do
communicate, although clearly their signals are directed at those who
represent an advantage for them, just as with us humans.
I hope my friend reads this article.
4 September 2004

42. Evolutionary throwbacks
PAUL R. EHRLICH gives the name "evolutionary throwbacks" to the generally
catastrophic results of the apparent decoupling between the speeds of
biological and cultural evolution in Homo sapiens. From the biological
point of view, we are a creature who has taken hundreds of thousands, perhaps
millions of years to physically become that which we now see. From the cultural
viewpoint, we are social beings who radically changed our relations with
ourselves and with the environment around us in only a few millennia.
Biologically, then, we are equipped to survive in a world that no longer
exists, for our vertiginous cultural evolution has completely transformed it.
Given that nature would take thousands of years to adapt our body to the new
situations we have imposed upon it, the only way to overcome the evolutionary
throwbacks from which we suffer, if we are to succeed in overcoming them, must
be culturally, something which does not seem easy.
Consider two examples. In the last 20 years, or maybe even further back,
perhaps since the decade of the Seventies, the number of women in developed
countries who have problems with pregnancy has increased almost exponentially.
From the strictly biological point of view, a woman is capable of having
children at 13 or 14 years of age. It is very probable that primitive women
had their first pregnancy after having had an average of 20 or 25
menstruations. Today, women in industrialized countries have their first
child at an average age of 28, that is, after having had some 200
menstruations. It is not difficult to suspect that behind that deferment may be
hidden the complaints and complications related to pregnancy suffered by modern
women. Thousands of generations would have to pass (supposing that one would
let those die who had complications, something which, of course, is not going
to happen) for nature to adapt women to be pregnant at around 30 years of age.
The only way, then, to confront this evolutionary throwback is to employ the
resources of modern medicine so that women can prepare their bodies as well
as possible for those late pregnancies and be carefully attended
during pregnancy and birth.
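The "some 200 menstruations" cited in this example follows from simple arithmetic, sketched here with assumed round numbers (menarche at 13, roughly 13 cycles per year for a 28-day cycle):

```python
# Back-of-envelope check of the figures in the text: a first child at 28,
# with assumed menarche around age 13 and ~13 menstrual cycles per year.
menarche_age = 13
first_child_age = 28
cycles_per_year = 13  # ~28-day cycle

cycles_before_first_pregnancy = (first_child_age - menarche_age) * cycles_per_year
print(cycles_before_first_pregnancy)  # 195, close to the "some 200" in the text
```

The same arithmetic for a primitive first pregnancy some two years after menarche gives on the order of 25 cycles, matching the other figure in the passage.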
The second example is gloomier and more difficult to confront.
Biologically the human being is a gregarious creature, but in small groups. We
are adapted to deal and interact with 100 or 120 persons at the most. That was
the approximate size of the groups of hunter gatherers who spent hundreds of
thousands of years traversing the savannas of Africa in search of food. Today
the human conglomerates consist of hundreds of thousands and up to millions of
individuals. Our nature has not been conditioned to adapt to such magnitudes.
We continue to respond to the masses as our distant ancestors did: with
sullenness and fear; or, when we form part of the mass, we easily dissolve in
the throng and lose our identity, as Elias Canetti notes, such that we become
capable of committing the most aberrant extravagances or the most infamous
acts of violence.
Thus, that an individual like George W. Bush, whose intellectual
consciousness does not exceed that of Cro-Magnon man and whose biological
nature is identical to the latter's, should be responsible for the destiny of
millions of human beings is not only an evolutionary throwback but an
authentic nightmare, one we can only overcome by applying the same
intelligence that, paradoxically, brought us to such a contradiction.
I hope that we achieve it before the world is the ultimate victim of our
accelerated cultural evolution.
15 May 2004

43. Obesity
IN SCIENCE fiction stories that I read during my youth, the man of the future
generally appeared as a tall, thin creature, of refined features and long
extremities, with a bald head and a bulging forehead considerably larger than
that of present-day man.
If I were to write a story of that genre now, I think I would abandon that
cliché in describing my characters. If one is to reflect what current
science suggests for the future (something which, in any case, every good
writer of science fiction must do), my characters would have artificially
youthful faces, with that plasticized aspect the skin acquires after two or
three surgeries; they would not be bald, of course, for hair implantation
would by then be universal; yet they would indubitably be very obese, of at
least 330 pounds, not uniformly distributed, admittedly, for a good
percentage of those pounds would be found about the waist and buttocks.
In fact, that model of the human being is already a reality in the very
heart of the Empire: a very high percentage of the inhabitants of the North
American Midwest sends the needle on the scales far beyond 220 pounds,
and the measure of their waists surpasses by several inches the
limit of 35 inches recommended by doctors, as we are assured with all due
seriousness by myriad scientific reports and with much more humor
by the unforgettable José Donoso.
In our own country which, although a proud member of the OECD, is very far
from being developed, obesity has in recent times become a public health
problem, as the health authorities inform us, and prospects for the future
foresee a sharpening of this problem.
We know all that. What is not so well known is that the origin of this
problem is found again in the disconnect between our biological and cultural
evolutions. Said in other words: obesity is the consequence of an evolutionary
throwback. The hominid who would evolve into Homo sapiens bet on the
brain as the instrument of its survival. But this organ requires a great deal
of energy to function: although by weight it represents only 2.5 percent of
the organism, it consumes almost 30 percent of its energy. The natural sources of
energy for our body are the fats of meat products and the carbohydrates, most
especially those that are rich in sugars, as is the case with honey.
Nevertheless, for primitive man, whose basic diet consisted of vegetables,
seeds, fruits and roots, those foods were not easy to obtain, so natural
selection acted on them in such a manner that they would fervently want to eat
meat or honey at the first opportunity, something that would occur at
most a couple of times per month.
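The disproportion implied by the text's figures for the brain (2.5 percent of body weight, almost 30 percent of energy consumption) can be checked directly:

```python
# Checking the text's own figures: an organ that is 2.5 percent of body
# weight but consumes almost 30 percent of the body's energy burns energy
# at roughly twelve times the rate of the average tissue.
weight_pct = 2.5    # brain's share of body weight, per the text
energy_pct = 30.0   # brain's share of energy use, per the text

ratio = energy_pct / weight_pct
print(ratio)  # 12.0
```

That twelvefold energy density is what makes the craving for concentrated calories (fat and sugar) such a strong selective pressure in the argument that follows.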
The desire to eat such foods we have inherited from our ancestors, but
in today's world we have access to them every day (at least in the
industrialized nations and in the privileged sectors of the poor countries) and
we consume them daily. If to this is added the fact that a good percentage of
the meat we consume (fish excluded) has been treated so
that the animal producing it fattens (and in consequence the animal who
eats it fattens as well), what we achieve, especially after age 35, when the
organism's energy consumption diminishes, is to model our figures on the
aesthetic canons of Rubens or, better still, of Botero.
Once more it was our intelligence that placed us at this crux, and to it we
must turn (why not, for example, transfer the excess of nutrition that exists
in rich countries to poor countries?) if we really wish to avoid a future in
which our sad world is populated by fat despots from Zempoala and Neros
cohabiting with famished pygmies.
22 May 2004

44. With pain you will give birth to children

To the woman he said, I will greatly increase your pains in
childbearing; with pain you will give birth to children...
Genesis 3:16
WITH THAT curse that God delivered on Eve for having tasted, against the will
of her Creator, the fruit of the Forbidden Tree, the Bible explains to us the
origin of one of the most singular characteristics of the human species: the
tortuous process of childbirth. It seems that the female of no other species
makes so great an effort, or endures so much pain, to bring her offspring into
the world as woman does.
As usually occurs, the theory of evolution offers us a very different yet
no less fascinating explanation of this phenomenon: the effort and pain
that accompany parturition in women have their origin in an
evolutionary competition between two characteristics of the feminine pelvis:
the width of the birth canal and the narrowness of the hips; or, expressed
otherwise, between a woman's capacity to give birth and her capacity
to walk upright; a competition in which natural selection left things
exactly in the middle.
In a book that I finished translating, On Fertile Ground: A Natural
History of Human Reproduction, its author, Peter T. Ellison, dedicates a
section to the matter of parturition in humans. Our most remote ancestors
walked on four legs, like the rest of the mammals. Doubtless the females of
those distant species had no difficulty giving birth, like the rest of the
mammals, since their birth canal (the passage through which the fetus
traverses the maternal pelvis) was sufficiently wide for the head of the child
(the most voluminous part of the body at birth) to pass through it without
difficulty.
The problems began when our antecedents began to stand upright. To walk on
two extremities, they must be as close together as possible, the better to
keep one's balance and to allow more rapid travel. In
fact, we walk as if following a straight line extended right
through the middle of our body, in the exact center between the two legs.
Thus, the narrower the hips (the iliac plates of the large pelvis), the
better one walks upon two extremities; yet, at the same time, the narrower the
hips, the narrower also the birth canal, which makes parturition considerably
more difficult if we take into account that, as we evolved into bipeds, we
became ever more intelligent and, in time, acquired bigger heads.
The dilemma of our female hominid ancestors lay in the fact that, if their
hips were broad, they gave birth without much difficulty but were slow and
torpid of movement, and in a world of hunter-gatherers that signified a
notable disadvantage. On the other hand, those who were narrow of hip moved
with greater rapidity and assurance (as occurs with modern runners) but ran
the risk of dying during childbirth if unable to deliver a child with a head
a little bigger than average.
As we said at the beginning, natural selection took matters to the middle
ground: modern woman has hips narrow enough to move without difficulty
and a birth canal just wide enough for a child to barely traverse it.
That explains the complicated and painful human parturition; at least until
recently, for medicine has taken charge of consigning the divine curse to
memory.
16 July 2005

45. Genes and nourishment
WE RETURN to the theory of evolutionary throwbacks of the American biologist
Paul R. Ehrlich, according to whom our organism, as the product of hundreds of
thousands of years of biological evolution, is adapted to survive in an
environment similar to that in which our Stone Age ancestors lived.
Cultural evolution in our species (in reality, a consequence of our biological
evolution) has dramatically changed the habitat in which the greater part of
the planet's inhabitants live, and that has meant that the constitution of our
organism frequently enters into conflict with the environmental
pressures it must confront. Thus, for example, we are genetically
programmed to crave meat and honey, because they were the principal sources of
energy available to our ancestors, and were difficult to obtain; yet, for that
last reason, we are also programmed to ingest meat and honey at most a few
times per month. Today, however, many humans can consume those foods whenever
the desire moves them (which is quite frequently), which leads to an excess of
fats in their organisms that often culminates in obesity, diabetes or
heart attacks.
The problem with these evolutionary throwbacks is that our cultural
evolution occurs much more rapidly than the biological, which, according to
Darwin's calculations, requires tens of thousands of generations to
accomplish significant changes in a species.
Although perhaps biological evolution is not as slow as the English genius
thought. The U.S. scientist of Lebanese origin Gary Paul Nabhan has published
a curious book titled Why Some Like It Hot: Food, Genes, and Cultural
Diversity (FCE, Mexico City, 2006) that deals with the relation between
our genetic legacy and our alimentary habits.
The interactions that occur between the genes, the foodstuffs and the
pressures of the environment (that can be diseases, parasites and pathogenic
agents) are very complex and surprisingly revealing. In the majority of cases
we think that biology dictates cultural food preferences; that
is, natural selection of certain genetic characteristics tends to annul
cultural traits that have no immediate survival value. But the
biological-cultural nexus can invert itself. Take, for instance, the case of
the ingestion of milk. Homo sapiens was genetically programmed to stop
tolerating milk shortly after weaning; this is because the infant
stops producing lactase, the enzyme that metabolizes lactose, the principal
sugar in milk, which, if not metabolized, causes inflammation and
cramps in the intestine. Until little more than 12,000 years ago (when
livestock farming began) very few adult Homo sapiens could tolerate
lactose; only those who carried a dominant gene capable of producing the
enzyme.
In populations devoted to agriculture and livestock farming, that gene,
until then rare in almost all members, became widespread in a matter of only
5,000 years.
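The pace implied by those 5,000 years can be sketched with a simple selection model. This is my own illustration, not Nabhan's: 5,000 years is roughly 200 human generations at an assumed 25 years each, and a modest selective advantage is enough to take a gene from rare to prevalent in that window.

```python
# A simple haploid-style selection model: each generation, carriers of the
# advantageous gene leave (1 + s) times as many descendants as non-carriers,
# so the gene's frequency p follows p' = p * (1 + s) / (1 + p * s).
# All numbers here are illustrative assumptions, not data from Nabhan's book.

def generations_to_spread(p0, p_target, s):
    """Generations for the gene's frequency to rise from p0 to p_target."""
    p, gens = p0, 0
    while p < p_target:
        p = p * (1 + s) / (1 + p * s)
        gens += 1
    return gens

# With a 5 percent advantage, a gene at 1 percent frequency passes
# 90 percent in about 140 generations -- some 3,500 years at 25 years
# per generation, comfortably inside the 5,000-year window.
print(generations_to_spread(0.01, 0.90, 0.05))  # 140
```

A larger advantage spreads the gene faster still (a 10 percent advantage takes about half as many generations), which is why the milk-drinking example poses no problem for Darwinian timescales.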
Nabhan says:
Although we do not know the historical details of the road which was
followed, the final product--the tolerance of lactose through genetic
control--suggests that the cultural practices of drinking raw milk and
lactic products generated the selective pressure for the genetic change.
Therefore, it is possible to go from the culture to the biology.
And in a relatively short time. We shall see other examples.
24 September 2005

46. Why are chiles hot?

For my beloved and admired uncle,
Adolfo Sánchez Vázquez, on his 90th birthday
THE CONSUMPTION of the very Mexican chile is a good example of how, in some
cases, cultural evolution has guided the biological evolution of Homo
sapiens. This member of the Solanaceae is characterized by containing a
unique chemical substance, capsaicin, which serves to protect it from its
antagonists. Human beings have in their bodies a pain sensor known as VR1
that "reads" the presence of capsaicin in the chile just as if reading an
increase in temperature. It also provokes a painful sensation of tingling and
burning when exposed to other substances similar to capsaicin, which are
found in black pepper and ginger. These irritants, just like the presence of
fire itself, activate a dramatic response in our sensory nerve endings and in
those of all the mammals studied to date.
Thanks to this substance, the chile plants avoid their fruits being food
for the mammals, for if they were to ingest them, the probability of the
enclosed seeds surviving would be practically nil, since they would be damaged
by the action of the teeth or by the gastric juices of the mammal who might eat
them. Birds, on the other hand, do not damage the chile seeds when they eat
them, such that, whether they fall from the heights of a tree while being
consumed, or later, when defecated, the seeds can germinate in fertile soil.
This is why chiles do not sting birds at all when ingested: birds are the
plant's dispersal agents.
Thus, with all certainty our most remote ancestors did not eat chiles, just
as all other mammals still do not; although it is probable that, for several
tens of thousands of years, humans from time to time ingested the
chile for medicinal purposes, and perhaps the smoke of the burned plant was
used to scare away predatory animals.
But beginning with the establishment of agriculture and livestock farming,
around 12,000 years ago, our antecedents began to eat meat with much more
frequency than before. It was in that era when some wise counselor must have
discovered that chiles are splendid agents for disinfecting and preserving
meat, something of vital importance especially in the warm regions of the
world, where meat decomposes rapidly.
In this way, the consumption of this solanaceous plant became ever more
frequent, above all in the tropical regions of the planet. And this
acclimatization has been accompanied by a transformation in the genetic
legacy of the inhabitants of those regions: several genes in our organism
have to do directly with the palate's capacity to tolerate irritating
substances. Hundreds of thousands of years ago, the genes whose action allows
one to tolerate the burning were rare among human beings. Today, however, in
wide swaths of the world population (again, especially among those who live
in warm regions) those genes are widespread. And, once more, that occurred in
a very short time (speaking in Darwinian terms).
Blessed genes which, by preventing physiological rejection, permit us to
delight in one of the most delicious and strange plants that nature has
planted in mother earth.
10 October 2005

47. Why do we mistreat women?
I WRITE these lines on the day that the U.N. has dedicated to women
throughout the world, and cannot resist offering some reflections on those
beings who comprise a little more than half of humanity, or rather on those
other beings who comprise a little less than half the human race and who for
millennia have contrived to make life rather unhappy for their sisters of
the opposite sex.
Why have men, at least throughout history (since of prehistory we know
very little in this respect) treated our species companions as inferior beings?
Why even today, when one supposes that we have developed an ethic that tells us
that women and men have the same rights and obligations, that there should
exist a plane of absolute equality in our relations, that, finally, we have
recognized that they are, like ourselves, human beings, do we continue in
myriad instances to disparage, mistreat, rape and even murder them? Is
it perhaps because, to use Paul R. Ehrlich's concept, many of us
suffer from an evolutionary throwback in relation to women? Or might it
be that we are dealing with a cultural throwback?
It is documented that in some primitive communities (such as the Eskimos)
female infanticide was practiced. For an Eskimo girl to have the
right to live, one or more male siblings must have been born before her.
That practice (which continues to occur clandestinely in
some regions of India and China) could be explained in terms of survival: the
couple needed one or more male children to help the father in the arduous
labor of the hunt so as to assure the survival of a daughter who could not
perform such hard tasks. Within the species itself there was a selective
advantage: the male was considered more useful and, ultimately, his birth was
favored.
Nevertheless, despite such a dubious effort, the number of men and women in
those communities equalized, which indicates that, in the final account,
women were more resistant and longer-lived. If, as is probable, the
prehistoric communities of hunters and gatherers engaged in the same
practice, perhaps a sensation of superiority was shaped into our genes over
hundreds of thousands of years. If we do suffer such an evolutionary
throwback, it would lead us to consider inferior those whom we take to be the
weakest of our species, the females, and to act accordingly.
Yet things could be exactly the reverse: perhaps during all of prehistory
the arrangement between men and women was as completely egalitarian as we now
desire and the problems began when we discovered agriculture and changed our
habits of life. It is almost certain that it was women who performed the
feat of domesticating plants, since they basically specialized in gathering
while the men were absorbed in hunting and fishing. Maybe the enormous power
that their discovery gave them permitted them to dominate the men (many
ancient cultures speak of a mythical matriarchy), whom they subjugated
for hundreds of years until the latter rebelled and imposed their force upon
the cleverness of the women. Perhaps we still have not forgiven them, in
which case the mistreatment to which we subject them is a case of a cultural
throwback. If only we knew.
12 March 2005

48. The beetle collector
CHARLES ROBERT DARWIN was born in Shrewsbury, England, on February 12th, 1809.
The history of the West records that year as one of the last in which
continental Europe was subject to the power of the Napoleonic army and, above
all, captivated by the fresh ideas of liberty, equality and fraternity that
accompanied the French troops in their epic. Ideas that served, especially
the first, to sustain and give definitive victory to an economic order that
since the Renaissance had struggled to impose itself.
Darwin's grandfather was a famous doctor. In his work Zoonomia; or, The
Laws of Organic Life, he formulated the idea of the evolution of species,
that is, of the development of the organic world, although his notion was akin
to Lamarck's theory that species changed deliberately to better themselves.
The young Charles was an enthusiastic reader of his grandfather. Perhaps that
led him to study medicine at the University of Edinburgh. He soon lost
interest in those studies; it was not curing people that interested the
restless youth, but understanding the fascinating world of living things.
After the failure in Edinburgh, his father sent him to Christ's College,
Cambridge, with the hope that his son would become a clergyman. Charles'
curious and inquisitive spirit adapted even less to the pronouncements, arid
at least to him, of his theology teachers. Loyal to his love of nature and of
living things, he dedicated the greater part of his time at Cambridge to
collecting beetles and to reading Humboldt's accounts of the unsuspected
variety of species that populate the earth.
In 1831 (a year in which the Napoleonic exploits were only a memory, for
long before, the despotisms of the old European ruling houses had been
restored, though now accompanied by the favorite son of liberalism, capital
and its market, which was en route to becoming the dominant system) Darwin
accepts the invitation of Henslow, his friend and professor, and embarks as
on-board naturalist aboard the sailing ship Beagle upon a voyage whose
objective was to map Tierra del Fuego and then return to the
British Isles by the route of the West Indies.
Darwin's voyage on the Beagle is too well known to dwell upon it.
It suffices to say that his encounter with the highly varied and exotic
Brazilian flora and fauna and with the singular development of the species of
birds and reptiles he observed in the Galapagos was what led him to establish
the idea that species were not created in the shapes they now have, and that
those which disappeared (there already were, in Darwin's time, sufficient
fossil remains to comprehend that species had existed which are now extinct)
were not replaced through new creation, as Cuvier thought in his curious
theory of catastrophes, but were the ancestors of modern ones. The
species, thought Darwin, are transformed, in a very slow and eventful process,
in adaptation to their medium. Those which accomplish that, survive; those
which cannot do so, disappear. It concerns a process in which the divine hand
does not participate, that is, God did not create the species (and among them,
of course, man); it concerns a process of natural selection, to use
Darwin's term.
In the Galapagos he conceived the key ideas of evolution, but delayed more
than 20 years in publishing his theory, for he was conscious of the bitter
polemic that it would cause.
At last, in 1859 (a year in which capitalist liberalism ferociously ruled
the western world) he publishes The Origin of Species, a fundamental
work of modern biology that even today, a century and a half later, provokes
resentment, paradoxically in the countries assumed to be the champions of
modernity.
27 August 2005

49. "Intelligent design"

I
WE SAID that in 1859, during the boom of capitalist liberalism, Charles Darwin
published The Origin of Species, where he presented his theory of the
evolution of living beings. The importance of this theory within the
biological sciences is equivalent to that of quantum theory in the
field of physics. In it was found the reply to innumerable questions that the
discoveries of the naturalists had posed. Those strange fossils of beings
that seemed absolutely unlike any seen in the living world were nothing more
than remote antecedents of the latter, creatures whose descendants little by
little were transformed, pressured by the environment in which they lived,
until finally becoming a contemporary species; or else, until disappearing,
as occurred with the dinosaurs.
Darwin's theory proposed that only those individuals survived who were
best adapted to their medium. This pleased some philosophers (or perhaps it
would be better to call them organic intellectuals) of the era when it was
proposed, who drew an abbreviated version of the principle of natural
selection: for them, it was the strongest who survived (and the strongest are
not necessarily the best adapted), an idea that blended marvelously well with
the dogma of the free market.
Yet from the theory of evolution there emerged one consequence that
did not please the organic intellectuals of the time at all, and much less
the religious authorities (whether Catholic or Protestant): man himself, as
the natural creature that he is, also is the result of a long evolutionary
process. We were not always as we are and we were not always here. Before we
could exist, other beings similar to ourselves wandered the world; even before
them, other, more remote beings who walked upright lived on this planet; and
before them there were primates from whose descendants come the apes, our
upright-walking ancestors, and humans. We
are, then, distant cousins, if not direct descendants, of the gorillas and the
chimpanzees.
One can imagine the scandal that this idea provoked in the good
consciences of the late 19th century. Things got worse when, in 1892, the
adventurer-scientist Eugène Dubois proclaimed to the world the discovery
of what he considered the remains of the "missing link," which he baptized as
Java Man. Today we know that it refers to an individual of the species
Homo erectus (one of those ancestors of ours who walked on two
legs) who lived on that island about 800,000 to one million years ago.
It took many years for humanity to swallow this bitter draught. As late as
the Twenties of the past century there was a celebrated case in a town in the
southern United States where the judge ruled that the teaching of evolution
ran counter to the principles and values of the "American" nation, and so
ordered that its teaching be suppressed in the state's public schools and
that the Biblical account of the origin of man be taught instead. On that
occasion Adam defeated Java Man.
And even today, well into the third millennium, under the
implacable shadow of an economic neoliberalism perhaps even crueler than its
19th-century grandparent, there are those who refuse to accept that we are
cousins of the chimpanzees. Since by now not even they can swallow the Biblical
version of human origins, they now propose instead of evolution a supremely
absurd idea which, paradoxically, they have christened as "intelligent design."
The things that one sees...
3 September 2005
II
That which the city most degrades in man is the
intelligence, for it either traps him in vulgarity,
or plunges him into ignorance.
EÇA DE QUEIROZ
Confronted with the impossibility of sustaining the Biblical hypothesis of
creation, the enemies of Darwin's theory, those who think, with no scientific
basis, that the human species does not belong to the great family of
terrestrial fauna, propose a delirious theory as an alternative to natural
selection: the famous "intelligent design." They concede to their critics that
things occurred more or less as the theory of evolution describes, but that
at the key moments of the saga of life (for example, when the amino acids
formed in the primordial sea that covered the earth united to form the first
protein capable of reproducing itself) a superior intelligence
intervened in the process. Of course, the same intelligence also intervened
billions of years later, when the genes of certain primates mutated to give
rise to the family of hominids, whose latest descendants (that is to say,
ourselves) would propound and be conscious of that intelligence which created
them and made them unique. Tell me if the ways of the Lord are not tortuous!
The Bible of those new prophets (many of them certainly Adventists) could begin
as follows:
About 12 billion years ago God created the universe; seven billion years
later he created the Earth; one billion years later he created the amino
acids and ordered them to grow and procreate; and some millions of years
ago he chose a primate and blessed it, it and its descendants, with the
spark of his intelligence. Satisfied with his work, he rested one Sunday.
The matter would be no more than a bad joke were it not that an
influential pressure group in the circles of economic and political power in
the United States is fighting an arduous battle for the intelligent design
theory to be taught in the schools of the country, so that the students may be
the ones who decide which of the two is correct. As if science were a matter
of options!
To trivialize scientific knowledge never yields good results, except for
the charlatans who cultivate these supposed "scientific theories" (remember the
case of Dianetics), and it can instead have fateful consequences. Recall, for
example, the barbarities committed by the Nazis in the name of the "theory of
the Aryan race." Perhaps what has just happened with the terrible Katrina
is another good example of what can occur when scientific knowledge is
scorned.
10 September 2005
50. Sir Alexander Fleming
FEW people have the satisfaction of knowing that their work has served to save
an infinitude of human lives; or rather, to prolong many lives through
the cure of their bodies, for, as Cioran would say, human life cannot be saved.
One of them is the Scottish doctor Alexander Fleming who, as is well
known, discovered penicillin. What is not as well known is the life of this
scientist who received the Nobel Prize in 1945 and who died, ten years later,
on the 11th of March, 1955, in the midst of great homages and with the title
of Sir before his name.
He was born in Ayrshire, Scotland, on the 6th of August, 1881. Of humble
origins, in his youth he had to make a living working on the London wharves
to pay for his secondary studies. Given his excellent performance as a
student, in 1902 he obtained a scholarship that permitted him to dedicate
himself wholly to his career at the University of London, as an intern in
the medical school of St Mary's Hospital.
Upon completing his studies with excellent grades he decided to continue
in the university milieu to devote himself to research, without it mattering to
him that in those days (as in our own) research and academia were much more
poorly paid than the practice of the medical profession.
Ever since his student days he had nurtured a keen interest in
bacteriology, especially in the chemotherapy of illnesses, and he never strayed
from this field throughout his long professional life. He formed a team with
the doctor Almroth Wright, who was known in the scientific world for his vaccine
to prevent typhoid fever. In those days, people of science knew that the
bloodstream contained cells, called white blood cells or leukocytes, that
combat harmful microbes. Wright and Fleming sought the pharmacological agents that
would augment the efficacy of the leukocytes in their natural battle against
the invading germs.
The First World War caught them in full activity. Both were transferred to
the European fronts, where they prolonged innumerable lives with the vaccine
against typhoid. When the conflagration ended in 1918, Fleming returned to
St Mary's Hospital, and one year later he was named a professor of
the Royal College of Surgeons.
In 1922 he had his first success as a researcher. He took advantage of a
bad cold he was suffering to study the effects of his nasal secretions on a
microbe culture. He observed that in the spots where the drops of nasal fluid
fell, the microbes died. Encouraged, he tried other bodily secretions
(tears, saliva, etc.) and found that in every case the microbes died. He called
the substance present in those fluids lysozyme; it had antibiotic
properties and turned out to be an enzyme. He reported his discovery,
though without much success, for he was unable to isolate the pure
enzyme.
Six years later, in 1928, when he went to discard a spoiled
staphylococcus culture, he noticed that within the culture there were some
spots of mold and that, around each spot, the bacterial colony had
disappeared. He isolated the mold and identified it as
Penicillium notatum, closely related to the mold found on old bread,
and concluded that the mold was responsible for inhibiting the growth of
the bacteria.
He published his work, but again, due to the lack of economic resources
and the necessary infrastructure, he could not isolate the active principle in
the mold.
Ten years had to pass before the chemists Florey and Chain, working in
England, devoted themselves to isolating that substance, something they
achieved in 1941, during the Second World War. Since then penicillin may have
prolonged as many lives as Nazism extinguished, or more.
Despite the enormous fame that his discovery procured for him, Fleming
never ceased being a humble and simple person. He used to say: "I didn't do
anything, nature makes the penicillin, I only found it."
11 March 2005
51. Connected vessels
IN 1905, H. G. WELLS (whom the Encyclopedia Britannica describes as a
"novelist, sociologist, historian, and utopian") published A Modern
Utopia, a work that completed a vast cycle of novels, begun ten years
earlier with the publication of The Time Machine (1895), which gained
him universal recognition as a pioneer of the science fiction genre.
That same year, at his parents' initiative, Frederick Grant Banting, a
bright youth of 14 and a native of Alliston, Canada, enrolled in a seminary.
He did not last long there. He managed to convince the pious fathers that
his vocation was to serve not God but his neighbor: he enrolled
in Victoria College, where he received the title of doctor in 1916.
Wells, born in Kent, England in 1866, the son of a professional cricket
player and a housemaid, obtained, thanks to his remarkable intelligence, a
scholarship to study at the Royal College of Science in South Kensington, and
he graduated later with full honors from the University of London. Newly
graduated, in 1888 he began his activity as a science teacher at the same
university. In 1893, a severe health crisis moved him away from the lectern
and toward journalistic and literary work, an activity that, being quieter,
better suited his illness, the terrible diabetes mellitus.
One year after H. G. Wells graduated, in 1889, the German doctors Mering
and Minkowski published an article in which they expounded their suspicion
that the pancreas is related to diabetes, because when that organ was removed,
symptoms developed resembling those of the disease. They attempted without
success to isolate the hormone existing in the pancreas responsible for
regulating the sugar content in the blood.
The young Canadian doctor read the article by the Germans and envisioned
an experimental project to isolate that substance. Assisted by a young
physiologist and biochemist named Charles Best and with the cooperation of
various dogs who endured endless operations on the pancreas, they succeeded at
last, in 1921, in isolating the hormone, which they first named isletin
(after the island-like shape of the cell clusters from which it is extracted);
a little later they changed the name to the more elegant and Cervantine
insulin, and they tested its effect on terminal diabetics. The result was
a true miracle: those condemned to die returned to life through receiving an
injection of the hormone. But they discovered that one dose was insufficient:
the diabetic had to administer insulin to himself every day to control the
sugar in the blood.
In 1921, H. G. Wells was a writer and thinker recognized throughout the
world, and also quite controversial, for his social ideas were profoundly
critical and advanced. He weighed the capitalist system and the novel
socialist project of the brand-new Soviet Union alike in the balance. He expressed
his ideas in a surprising number of books and articles, more surprising still
if one considers that the writer's health worsened day by day. At that time,
his doctors could promise him only a few years of life.
In 1922, Banting and Best had managed to produce great quantities of
insulin derived from the pancreases of cattle slaughtered in abattoirs. Their
miraculous cure spread rapidly throughout the entire world.
In 1923, H. G. Wells received the insulin treatment. Thanks to it, he
lived until 1946, dying at the age of 79. The Second World War and the dawning
of the atomic age embittered his spirit. His last works reflect a loss of faith
in utopia.
In 1923, Banting and Macleod (in whose laboratory the project was carried
out) received the Nobel Prize. Deeply disturbed that his comrade Charles Best
was not included in the award, the Canadian doctor was on the point of refusing
it. He accepted in the end, although he shared half of the money he received
with Best. On February 21st, 1941, the bomber in which he was flying to Great
Britain crashed in Newfoundland, killing him.
12 June 2005
52. Katrina
IN 1718, THE governor of Louisiana, Jean-Baptiste Le Moyne, founded the village
of New Orleans, so named in honor of the duke of Orleans, regent of the
French realm in those times. He chose a promontory on the banks of the
Mississippi, around 100 miles from its mouth, between Bayou St. John and
the river, to erect the settlement. Not far from there, on the other bank
of the river, lies the famous Barataria Preserve, where later the no less
famous pirates Jean and Pierre Lafitte would find a formidable hideout with
easy access to the waters of the Gulf of Mexico. Four years later, New Orleans
would become the capital of the French colony.
The city, located in an unhealthy region of swamps and bayous, for the most
part some meters below sea level, in a region of recurrent tropical
storms, presented a formidable challenge to make habitable; yet its
strategic position as the gateway to the Mississippi river and as a landing
site for voyages from Cuba, the Gulf of Mexico and the Caribbean meant that
the effort was worth the price.
Thus, the history of this city is full of anecdotes about terrible
inundations (the first occurred in 1719, only one year after the founding),
epidemics of cholera and malaria, and feats of engineering and urbanism (in
1940 it claimed to be the most hygienic city, boasting the most advanced
drainage system in the world) that made it one of the most beautiful
and important cities of the United States.
In the second decade of the past century a system of levees was
constructed to rescue the city forever from the threat of flooding from the
overflowing of the Mississippi river or of Lake Pontchartrain, which
adjoin it. The engineers who performed the formidable work calculated that it
could resist the fury of a hurricane of up to category three. They discounted
the possibility of an event of greater intensity, for, according to their
calculations, the probability that a category four or five hurricane would hit
the town was less than 0.6 percent; that is, an event of such magnitude was
not expected for at least 300 years.
But those engineers did not take global warming into their calculations.
Today it is known that, owing to this phenomenon, the incidence of hurricanes
and their intensity have increased notably over the last 20 years. It does
not require a doctorate in the subject to understand that the greater the
temperature gradient, that is, the warmer the surface of the sea and the
colder the atmosphere above it, the greater the velocity the winds will
have. We all know this, except for the clique that governs in Washington and
its "scientific" advisers. They have repeatedly refused to subscribe to the
Kyoto accords to attenuate global warming, offering as pretext that "there is
insufficient scientific evidence" that this phenomenon is responsible for the
climatic alterations we endure with ever greater frequency. Surely they
consulted their ineffable "intelligent design" to arrive at that conclusion.
We hope that the horror which Katrina has meant will lead the people of
the United States to dissociate themselves once and for all from the rascals
who govern them.
17 September 2005
ECOLOGY
53. Compulsive predators
I
MORE OR less, a celebrated cartoon by the unforgettable Abel Quezada
recounted that Saint Peter, after having seen God create Mexico and deposit
therein an abundance of bounty, beauty and riches, asked him: "Is that not to
give too much to just one country, my Lord?" God, with his infinite justice,
had this reply: "To compensate for so many gifts, Peter, I shall put Mexicans
on that land."
This acid anecdote came to my mind as I leafed through a book that fell
into my hands as a Christmas present: Animales de México en
peligro de extinción, by Gerardo Ceballos and Fulvio Eccardi, published
by the Alejo Peralta Foundation. Magnificently illustrated and written in a
language accessible to laypersons, the work offers a wide
panorama of the current situation of biodiversity in our country.
It is well known that Mexico occupies a leading place on the world scale
in biological diversity. In fact, one of every ten known
species lives in our territory; after Brazil and Indonesia, Mexico occupies
third place in biodiversity. We are the country with the world's greatest
variety of amphibians and reptiles, the third in mammals, the fourth in higher
plants, and the 11th in birds.
Nevertheless, Ceballos and Eccardi also remind us that we are one of the
countries whose population has grown most rapidly in the last century: there
were eight million persons in Mexico at the beginning of the 20th century; by
1940 we were already 20 million, and at the dawn of the new millennium we
number 100 million. A growth of more than 1,100 percent in only 100 years.
There has been a direct correlation between the population growth and the
decrease in biodiversity in our country. Simply in the case of vertebrate
animals, the last century has seen disappear from our lands, lakes, seas and
skies, 22 species of fish (18 of them endemic, that is, which live exclusively
in our territories), 11 species of birds (five of them endemic) and 11 species
of mammals (seven of them endemic), among which are the Mexican wolf,
the grizzly bear and the sea otter.
In the case of the fishes, all those species disappeared due to the
modification or destruction of their habitat; half of the birds disappeared for
the same reason and the other half were victims of overexploitation by hunters.
In the case of the mammals, intensive hunting or the introduction of foreign
species into their environment was the cause of their extinction.
In sum, the direct or indirect cause of those 44 species having
been extinguished in our nation has been humanity. What took nature three
billion years to create, we have eliminated in less than a century.
And the future looks gloomy for many other species. Who knows how much
longer we may see, to cite only a few, the formidable jaguar on our plains;
the pronghorn antelope and the bighorn sheep traveling the rugged
mountains of the Baja California desert; the golden eagle, mythic nagual
of the Mexicans; the gorgeous flamingos who paint the inlets of Quintana
Roo pink; or the enigmatic manatees, once mistaken for mermaids, who
have ever less room to live.
It may be that the origin of this predatory conduct characterizing us is
to be found in ignorance. Very few understand that by risking the planet's
biodiversity, we put in greater risk the survival of our own species.
11 December 2004
II
In the Museum of Natural History in Washington, D. C., repose the desiccated
remains of Martha, who died in the Cincinnati Zoo in 1914. Martha was the last
specimen of the passenger pigeon, a bird native to the United States.
A century earlier, various colonies, each of around one billion of these
birds, sliced through the skies of the North American midwest. This unique
species had found its means of survival in massive reproduction and in
constantly moving the spots where they deposited their eggs, hatched and
nurtured their offspring. Although they had many natural enemies (squirrels,
hawks, foxes, among others), the vastness of their colonies was their defense
against them. Since they moved continually, their predators never managed
to form groups large enough to threaten the passenger pigeon species with
extinction.
In the second half of the 19th century a predator appeared in the region
where passenger pigeons lived against which they had no defense. After the
Civil War, with access to the midwest of the United States open thanks to
trains, large groups of human beings, accompanied by their indispensable rifles
and shotguns, established themselves in the region. To their misfortune, this
type of bird made an excellent dish for the human palate, such that, in
addition to being massively consumed by the new inhabitants, they became one of
the prime export articles of the zone: wagons replete with slaughtered birds
were dispatched continuously to the east coast for consumption there. It is
said that, in 1878, a single hunter shipped three million birds from Michigan
to those markets. By 1889 passenger pigeons were extinct in
that state. Twenty-five years later Martha, the last exemplar of the species, died.
Curiously, the extermination of the passenger pigeons was an important
factor in encouraging the spread of an infectious disease, very
damaging to human beings, known as Lyme disease. The pigeons formed flocks
of billions and ate beechnuts and acorns; with the extinction of these birds,
more of this food was left for a type of rodent known as the deer mouse, which
allowed the population of those mammals to flourish. This made the environment
more favorable for certain ticks, parasites of the mice, that transmitted the
spirochete which causes Lyme disease. The commercial hunters, by provoking
the extinction of the passenger pigeon, made the environment more favorable for
the mice, the ticks and the spirochetes, and more unfavorable for Homo
sapiens. Martha's revenge?
This is only one example among thousands of the risks that we run by
imprudently exterminating a species. The links that comprise the chain of life
on the planet are much more fragile than we suspect.
Additionally, the unhappy fortune of those birds reminds us that relying
on massive populations to guarantee the survival of a species almost always
ends badly. Especially during the last two centuries the human population has
grown so enormously that it calls those birds to mind: it seems that our
guarantee of survival lies in the great number of individuals we represent.
However, the experience of the passenger pigeons, of the trilobites and of many
other species demonstrates that, though it seems paradoxical, the more of us
there are, the greater the risk of extinction we run. We should share space
with all the living beings who accompany us on Earth. Let us see to it, then,
that we are fewer.
18 December 2004
54. The isle of Tikopia
THE ISLANDS of Polynesia are a geographic location suitable for studying the
formation of human societies in a situation of almost total isolation. At the
same time, the study of these societies sheds a great deal of light on the
mechanisms by which individuals interact with their environment and on the
cultural values that emerge from such interactions.
On those islands there were communities, as on Easter
Island, that after having achieved a great cultural flowering, accompanied by
a notable increase in population, began to degrade the surrounding natural
resources and then followed a gloomy path of decadence, full of
intertribal wars and even cannibalism, becoming almost extinct.
However there were other communities that managed to harmonize their
customs with the environment they lived in, which allowed them to live together
in peace and relative abundance over many years.
This is the case of Tikopia, a solitary promontory rising from
the Pacific Ocean. Throughout historic times, the small island maintained a
population density almost five times greater than the average of the other
islands, although it was smaller in absolute terms. The islanders achieved this
by developing an intensive system of arboriculture, which covered the island
with gardens of economically valuable plants, with fruit and nut trees
giving shade to the sweet potatoes and other plantings. In the few spots
where there were no trees, they planted fields of sweet potatoes and other
nourishing vegetables. Fishing, carefully regulated by custom, was
the principal source of protein. Furthermore, and this may be the most
important point, the inhabitants of Tikopia employed birth control mechanisms:
celibacy, contraception, abortion, infanticide, almost suicidal sea voyages
by the young men (who were encouraged to embark on very dangerous expeditions)
and, in some cases, the expulsion of segments of the population. Indeed, zero
population growth was incorporated as a central element of Tikopia's religion.
The priests of that isle, in contrast to our Catholic prelates of today,
preached the virtues of birth control and warned of the severe punishments
that the gods had prepared for whoever might have more than two children.
Homo sapiens is the only species that practices birth control and
contraception. And it has done so for a long time, perhaps ever since
its biological and cultural evolution turned out in such a way that it had no
natural enemy other than itself.
The example of the inhabitants of the small island of Tikopia is very
instructive: avoiding excessive offspring was not an affront in
the eyes of the gods, but simply a mechanism to ensure that their offspring
would continue to have space to live.
And today, what occurred in that minuscule island happens on a world
scale: if we want our children to have space to live, it is imperative that we
be fewer or the conditions will not exist on the planet for the survival of our
species.
Luckily, it seems that the great majority of the Catholic flock
understands it this way, although their Ultramontane hierarchy is still very
far from doing so. Yet, in the final accounting, that is not terribly serious:
the vow of chastity to which the priests are subject makes them valuable
allies of birth control. They do not need the morning-after pill. The rest of
us do need it.
31 January 2004
55. The mind that is anxious about the future is miserable
THE OTHER night there fell where I live (Jiutepec, a community adjacent to
Cuernavaca) a downpour of rain and hail such as I had never seen in the almost
25 years I have been living in Morelos. Luckily, it lasted no more than 15
minutes in all its intensity; had it gone on for several hours, surely my
family and I would be in the category of refugees sleeping in the gymnasium of
some public school in Cuernavaca.
The fury and unexpectedness of the tempest brought to my mind (as always
occurs when I read, ever more often, news of climatic disasters) the
pavilion that Japan constructed at the 2000 Hanover Expo, which I had the
opportunity to visit. I transcribe some notes I took about that place:
The first thing that draws one's attention is the materials it is made
from: wood and paper. It was built that way with the intention of
commemorating the establishment of the paper industry in Japan, as I
understood after visiting it. From outside the building resembles
an enormous caterpillar, white like the paper from which it is made: an
ingenious and graceful interior structure of wood crossbeams forms three
grand cupolas covered in that material, which intersect over
approximately a quarter of their surface. At night the structure is
lighted from within, so that, seen from the outside and thanks to
the translucence of the paper, its appearance is spectacular, even more
beautiful than by day. One enters it from one of its ends. All that can be
seen from there is the wood passageway that guides the visitor; a floor
of white paper; the impressive vaults with their visible structure and
their dazzling whiteness; and precisely in the center, like an island, a
structure in the shape of a hemisphere, some ten feet in radius,
completely covered with natural vegetation. Within this island various
television screens announce the motto of the pavilion, "The wisdom of
nature," and announce to us the theme to be covered: the fearsome carbon
dioxide. Afterwards, the wood walkway continues until the opposite end
where there is a stairway that descends to the main floor. There we find
five "islands" that also are paper hemispheres. In the middle of each one,
by means of computer and television screens, the visitor is given a
veritable survey of CO2, culminating in the chilling message that,
if urgent and energetic action is not taken to diminish the annual
rate of production of that substance, the planet will be condemned to
overheat, with all the catastrophic consequences that implies.
The great investment this country made to construct the pavilion,
the vehemence of its motto, a desperate plea to learn the wisdom of
nature, and the near-total omission of its legendary achievements in
high technology make me think that surely the planet is much more
contaminated with CO2 than is admitted, and that the terrifying
overheating does not threaten Earth in the future but is
occurring now. Thus the sudden and abnormal climate changes the
planet has undergone in recent years are explained. "The mind that is
anxious about the future is miserable," Seneca said, and I fear very much
(after visiting the magnificent Japanese pavilion) that he was right.
That warning was issued only four years ago. Little has been done
since then, at least as regards the measures that have (not) been taken to
avert the disaster: neither the United States nor Russia has deigned to
subscribe to the Kyoto accords. What has advanced, on the other hand, and by
giant steps, is the furious response that nature is giving to its
overheating. Will we stop it in time?
4 June 2004
56. Our perception of time
THE EVOLUTIONARY biologist Paul R. Ehrlich points out that one of the causes
which leads us to destroy our environment with such enviable zeal is the
perception we have of time. The average duration of a human life is
extraordinarily short if compared with the length of many natural events,
especially those related to evolution. The simple and persistent process of
erosion of the Colorado River, for example, took around 20 million years, that
is, 20,000 millennia, to excavate the marvelous Grand Canyon. Our genes are not
programmed to conceive, or even, so to speak, to intuit, such spans of time.
We are only capable of taking note of changes that occur almost
instantaneously. Would anyone smoke, for example, if he knew that the day
after finishing a carton he would awake with acute emphysema? Of course
not. Yet the 20 or 30 years the disease takes to manifest itself in a
chronic smoker is a long enough lapse not to trouble him.
Something similar occurs in our interaction with the environment: if
instead of a half century, it had taken a week for the gases that our vehicles
and our industries produce to cover the skies of our cities with a skin of
contaminant particles, surely we would have promptly suspended the activities
that provoked such a disaster. Yet a half a century, a little more than half a
life, is a sufficient lapse so that we become accustomed to the changes we
cause almost without noticing them, just as we do not take account of how we
are aging upon seeing ourselves every day in the mirror.
Additionally, it is well to recognize that such a perception of time
allows us to live life with a certain calm. If we could conceive of longer
magnitudes of time, perhaps we would be much more careful with our natural
surroundings, but doubtless we would live perpetually anxious about the very
narrow margin (80, perhaps 90 years with great luck) of life that we have ahead
of us at birth. If even so we say that life seems but a breath, what
would we say if we could conceive a century's duration as if we dealt with a
year?
Ehrlich concludes that, given this practical limitation we have in
conceiving large magnitudes of time, all that remains for us is to sharpen our
awareness of what we do and, thereby, rely on the instruments which our
ingenuity has allowed us to develop. We cannot notice the damage that day by day
global warming is causing the planet, for example, but we have instruments that
are capable of doing so and with them we can measure this damage. Luckily, our
capacity to imagine numeric magnitudes is much less limited than that for
durations of time. If we know what is capable of causing a 30-foot wave, we can
easily conceive of that which would cause a 300-foot wave. And that is the size
that waves will come to be if we continue heating the earth.
The readings we take from these instruments can show us what will happen
during a period too long for us to conceive of it, yet indubitably short if we
compare it with how long we have lasted on the planet.
Perhaps something similar occurs with the perception we have of sizes and
of distances, and with the same disgraceful consequences for our surroundings.
19 February 2005
57. Like a soap bubble
JUST AS the average duration of a human life is extremely short if compared
with the times taken by evolution, the average size of Homo sapiens is
extremely small if compared, not to say with the magnitudes of the universe,
but simply with the dimensions of our own planet. If we imagine an ant of, let
us say, two centimeters in length next to a building 100 meters tall, the
latter will be 5,000 times taller than the former. If we compare the height of
a 5' 6" individual
with that of Mount Everest, it turns out that the individual is 5,000 times
smaller. And if, at the same time, we compare Mount Everest with the length of
the terrestrial circumference, it also turns out to be 5,000 times smaller.
Therefore, the ratio that exists between our size and that of the world is the
same as that existing between an ant and Mount Everest.
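These proportions are easy to check with rounded figures. A minimal sketch follows; the two-centimeter ant, the 1.68-meter person and the other dimensions are approximations chosen only for illustration:

```python
# Rough check of the three ratios described above: ant-to-building,
# person-to-Everest, and Everest-to-Earth's-circumference.
# All figures are rounded approximations.
ant = 0.02                     # a large ant, about 2 cm, in meters
building = 100.0               # a 100-meter building
person = 1.68                  # about 5' 6", in meters
everest = 8_848.0              # height of Mount Everest, in meters
circumference = 40_075_000.0   # Earth's circumference, in meters

for small, large in [(ant, building), (person, everest), (everest, circumference)]:
    print(round(large / small))   # each ratio comes out near 5,000
```

With these figures each ratio lands within about ten percent of 5,000, which is all the comparison requires.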
Let us put it in another manner: if the surface of a billiard ball is
observed with a powerful microscope, it will seem markedly rougher than the
earth's surface. The peaks and valleys that can be seen in the ball are
equivalent to mountains three times higher than Everest and depths several
times deeper than the deepest in our oceans.
With these comparisons perhaps we can give ourselves an idea of the
enormous size of our planet in relation to ourselves, but above all, of how
extremely thin is the terrestrial crust when compared with the size of the
planet. It is so thin that it cannot even sustain comparison with a layer of
the skin of an onion, which is, in proportion, hundreds of times thicker than
the terrestrial crust. Much better would be to compare it to the film of a soap
bubble of, let us say, two yards across. Better yet would be to compare the
earth with a sphere of that size full of a viscous and fluid material enclosed
in a solid shell the thickness of a soap bubble's film. The solidity of the
ground we walk on leads
us to think that the earth is solid. Reality tells us something quite
different. We literally are standing upon the cooled crust of an enormous hot
mass that does not stop flowing and moving.
Our oceans, with all the immensity that we attribute to them, form an even
thinner coat which rests upon the cold floor.
Until recently, our own smallness had left the terrestrial crust safe from
our everyday doings; but during the last two centuries our unmeasured
demographic growth and, above all, the huge quantity of energy that we have
managed to convert to our use begin to seriously threaten that thinnest of
membranes upon which we live.
It seems a small matter that we have caused, for example, the oceans of
the planet to augment their average temperature by one or two degrees. Yet if
we see the ocean as it truly is, a finite patch of water spread across the earth, as
if it were a puddle two millimeters deep across a surface of 500 square meters,
two degrees of temperature are capable of notably increasing the level of
evaporation and, with it, the intensity of the winds; which, in turn, alters
the global climate and delivers droughts, floods, hurricanes, et cetera.
Finally, if we manage to elevate that temperature a few more degrees,
surely we will burst the soap bubble and no intelligent biped shall survive to
tell of it.
26 February 2005
POPULAR SCIENCE
58. George Gamow (1904-1968)
THE SPREAD of scientific knowledge is an issue as important as it is
complicated. As well as informing the common man concerning the advances of
scientific knowledge and the puzzles that modern science confronts, and
thereby making him more conscious of the challenges and risks that such
understanding represents, the propagation of science should be a determining
element in the vocational orientation of many young people. There are more than
a few distinguished scientists who have confessed that their interest in
science was born during their childhood after having read an article or book of
scientific popularization.
And we say that it is a complicated matter because it is not simple to
expound that which is complex and apparently involved in simple,
comprehensible terms. What complicates the problem even more is that, in
general, the persons who thoroughly know a scientific theme have no interest in
communicating their understandings to those who are not their colleagues, or
when they do have it, lack the capacity to communicate with simplicity and
precision. The opposite also occurs: there are persons who have great
facility in communicating with their neighbor and an enormous interest and
enthusiasm for popularizing scientific knowledge; but, unfortunately, many of
them lack a deep understanding of the themes they popularize, which results on
many occasions in distorting the knowledge they offer for understanding. Very
few, and in the end very valuable, are the individuals who possess a solid
scientific background and at the same time a brilliant capacity to communicate
it.
George Gamow, who, if he had lived, would have completed 100 years this
past March 4th, was one of those. A native of Odessa, in the Soviet Union, from
his youth he showed interest in the world of science. He obtained the doctorate
in physics from the University of Leningrad in 1928, and six years later
abandoned his birth country to establish himself in the United States, where he
remained until his demise, in 1968.
His scientific work is impressive, as much for the importance of his
contributions as for the variety of subjects that captured his interest. One
could say it was he who heated up the Sun for, until Gamow proposed his theory
of nuclear combustion in the heart of the stars, it was thought that our
ruling star was in a continuous process of cooling. Gamow demonstrated that
the nuclear reactions that occur in the interior of the star mean that, far
from cooling, its temperature gradually increases until a distant day when the
atomic fuel will begin to run out. We shall not perish of cold, as was thought
at the beginnings of the last century; the theory that Gamow developed condemns
the earth to an infernal bonfire.
He also participated in structuring the liquid-drop model of the
atomic nucleus, a model that was the basis of the theories of nuclear fusion
and fission. His work on Lemaître's theory of the creation of the universe is
responsible, in large part, for the modern cosmogonic theory known by the
name of the great explosion, or Big Bang. Biochemistry was not foreign to
this scientist either, to whom we owe the concept of the genetic code made up
of basic components of deoxyribonucleic acid.
As if all this were not sufficient, Gamow found a way to dedicate a good
part of his time to the spread of science, a field in which his contributions
were as important as the science itself.
27 March 2004
59. Mister Tompkins
GEORGE GAMOW launched his career in the popularization of science as an
authentic novelist. Just as Cervantes created his Quixote, Conan Doyle his
Sherlock Holmes or Malcolm Lowry his Consul, likewise the physicist of Russian
origin gave life to his principal protagonist and created, like the others had
done, a world around him.
We refer to the ineffable Mr. Tompkins, a sympathetic and pleasant person
of middle age who earned a living as a bank employee and whose scientific
attainments did not go beyond what he learned in high school. Perhaps because
he had nothing better to do, or perhaps for another more intimate reason we
shall detail below, Mr. Tompkins had "the bravery," as the author tells us, "to
participate in some semi-popular lectures about problems in modern physics."
In general Mr. Tompkins found the discussions he heard a hard bone
to gnaw, such that it was common for him to allow his fatigued head
to sink into a refreshing siesta halfway through the sessions. Yet in the
case of Mr. Tompkins, as had occurred innumerable times in the literature, the
dreams he had during his naps were revelatory: in that dream world he
interpreted, through apparitions of fantastical personages or delirious
situations, that which the tiring voice of the speaker caused to arrive at his
ears.
In dreams it is not difficult to see our dimensions shrunken so that we
stride through the labyrinth of atoms of a metal, simultaneously dodging the
speedy electrons that shoot by on one side or another; nor is it difficult to
mount a ray of light and travel with it to the limits of the universe; there is
no danger if we submerge ourselves in the nucleus of the sun and witness the
formidable explosions that occur in it and we can wait without desperation the
million years it takes to arrive at the surface of the star; in the world of
Morpheus we can easily scale the spiral staircase that forms the basis of DNA
and contemplate how it encodes the amino acids that are strung together in long
chains of proteins; we can, finally, see how the miracle of life began in that
primitive ocean which breathed ammonia and carbon dioxide and was riven by
continual electrical discharges and penetrating beams of ultraviolet light.
Thus, among dreams and lectures, Mr. Tompkins unexpectedly widened his
vision of the world and his comprehension of the things that surround him.
Something similar happens to the reader who becomes engrossed in the
description of his adventures.
If that were not enough, his assiduous attendance at those conferences
rewarded our hero with a companion for life. Maud, the daughter of the
most assiduous and wisest attendee, looked favorably upon the timid bank
employee who invariably slept during the lectures imparted by the father of the
girl.
There was a wedding, and the adventures and the dreams about the world of
science (Maud also possessed the gift of dreaming) moved from the conference
hall to the Tompkins' home, where the father-in-law took advantage of the
slightest excuse to indoctrinate the son-in-law in scientific lore; and Gamow,
in turn, took advantage of the opportunity to write more and more stories.
It suffices to say that, as should occur in children's tales, Mr. Tompkins,
Maud and the professor were very happy, as have been the thousands of readers
who have had the opportunity to sink a tooth into the novels of George Gamow.
10 April 2004
60. Scientific popularizers
IN RELATION to the article I published on another occasion that spoke of the
labor of the Russian-North American physicist George Gamow in the field of the
popularization of science, Mr. Jorge Luis Serrano Texta, a neighbor in Mexico
City, had the kindness to send me electronic mail in which he reminds me that,
although they are not as many as we might like, there are a good number of
scientists who, in addition to excelling in their professional fields, are
excellent communicators.
I transcribe the commentary of Mr. Serrano, for I consider the examples he
suggests are very fitting and worth reading:
Allow me to intrude upon your valuable time to place under your consideration
a brief commentary concerning your interesting contribution of March 27th
in the weekly, Labyrinth, where you refer to the scientist George
Gamow.
It seems to me that some distinguished scientists have been able to
lucidly communicate many themes that seemed thorny and destined for an
elite, for example, Isaac Asimov with his New guide to science
(Basic Books, 1984), a book in which he manages to captivate the ordinary
reader, since he proceeds in simple and entertaining language from the
origin of the universe to the emphasis on thermodynamics.
We cannot omit the important presence of the Englishman Stephen
Hawking, and his most popular book, A brief history of time
(Bantam, 1998) and the great popularizer of science of United States
origin, Carl Sagan, with his brilliant Billions and billions
(Ballantine, 1998). Bondi, Bonnor, Lyttleton et al. achieved this difficult
objective in their exposition, El origen del universo (FCE, 1977)
in which their dissertations were converted to a comparative report.
This year, the noble institutions FCE, SEP and Conacyt, through "La
Ciencia desde Mexico" (actually Science for All) have contributed towards
enriching scientific culture in non-specialized readers with an
outstanding recent book, Why there are no extraterrestrials on
Earth by the teacher Armando Arellano Ferro, whose forceful
affirmation in the title is supported with ample, irrefutable arguments
from a scientific point of view. He removes the blindfold that covers our
eyes for he provides the tools to doubt the apparent seriousness in which
the theme of the existence of extraterrestrial life has been clothed.
Of course "From the Ravine," of your authorship, and "The Uranium
Mine" by E. Monteverde are the best cultural columns given the task of
propagating scientific knowledge in the Mexico of today.
(I am grateful for this last commentary and I clarify that if I dare
to transcribe it, it is so not to omit the observation which our writer
makes concerning the master Monteverde.)
And while we are on this point, I would add to the list some brilliant
scientists who come to mind. Albert Einstein, the most well-known physicist of
the 20th century, wrote, together with Leopold Infeld, a fine book called
The evolution of physics (Touchstone, 2008) in which the authors take
the reader by the hand through the fascinating byways of modern physics. The
great French mathematician and thinker Henri Poincaré also broke into the
field of popularizing science with an essay entitled, Science and
hypothesis (Cosimo Classics, 2007) in which he sharply revises the
scientific understanding of the 19th century. Bronowski, in his "Ascent of man"
(Little Brown, 1976) guides us, like a modern Virgil, through the formidable
epic of scientific thought.
3 April 2004
61. Comical history and trip to the Moon
ONCE García Márquez, referring to the beloved and irreplaceable
Diccionario de uso del español by María Moliner, noted a
touching error that appears in the book: "Day--says Moliner--is the
space of time the sun takes to make a complete revolution around the earth."
Perhaps the involuntary mistake of the authoress of the dictionary is because
our consciousness rebels at the idea that the huge mass upon which we live
moves, and at a vertiginous velocity.
Although today we know that more than 2,000 years ago the astronomers who
met in Alexandria held acrid debates about which was the fixed center of the
universe (whether the Earth or the Sun), the geocentric thesis, sustained by
Hipparchus and Claudius Ptolemy and endorsed with the impeccable authority of
Aristotle, finally imposed itself and no one seriously put it in doubt for over
1,500 years. After all, to suppose that the earth is fixed and that the sun and
the other stars rotate around it comports well with our common sense and with
daily experience.
If it was not easy for Copernicus to convince his colleagues, learned men
from beginning to end, of the mobility of our planet, it must have been much
more difficult to convince the other mortals.
For some time I have been investigating how the idea of heliocentrism
penetrated western societies until it became an irrefutable
truth that we are taught in school from the time we are infants, and have
found that the process was much longer and more tortuous than I supposed at the
beginning. For instance, during the second half of the 18th century more than
two thirds of French peasants were still convinced that the earth
remained in repose and all the bodies revolved around it. In this search, I
just uncovered a truly surprising text. It consists of a tale called
Comical history of the states and empires of the Moon, which came in
the middle of the 17th century from the pen of the legendary Cyrano de Bergerac
(that indomitable swashbuckler, as wise as he was combative, whom Edmond
Rostand immortalized in the tragicomedy of the same name that he published in
1897), and which perhaps makes him one of the pioneers of the genre of science
fiction.
In broad strokes, the story deals with a person who was convinced that the
moon, like the other celestial bodies which rotate around the sun, harbored
forms of life very similar to those of the earth, including even the
ferocious human beings. With the goal of proving his theory, he designs an
ingenious contrivance that he imagines will serve to transport him to the moon
and is distinguished by its simplicity: he straps on myriad jars full of dew
and awaits the light of dawn to start the voyage, for it is well known that the
heat of the sun draws the morning dew up into the heavens. The apparatus works,
perhaps too well, for no sooner does the ruling star appear than our personage
lifts into rapid flight...but towards the sun, not the moon. Frightened by the
imminent catastrophe of being fried alive, he decides to break some of the jars
to diminish the speed of the flight. He ends up breaking almost all of them,
such that he ends by returning to earth. Upon touching down, somewhat battered
by the impact, he does not recognize where he is. Shortly there appear some
curious little men dressed as God delivered them who observe him with curiosity
and fear. On speaking with one of them, he discovers that he has fallen not
into France but into New France, that is, into Canada. "How is it possible in so few
hours to cover that enormous distance?" he asks. He himself answers: "While I
was on high, the earth continued to revolve, such that what moved was it and
not myself." A little later, to the only civilized man extant in the remote
Canadian hinterland, he expands on his explanation in one of the loveliest
discourses in defense of the heliocentric theory:
And for another thing, what evidence do you have to think that the sun
does not move when we see that it does? And that the earth whirls around
it so fast when we feel the ground motionless beneath our feet?
Monsieur, I answered, I will tell you why we are obliged to think so.
First, it is commonly accepted that the sun is in the center of the
universe, since all the bodies in nature need this primordial fire at the
heart of the realm to meet their needs promptly. Also, the cause of
procreation is placed equally in the middle of bodies, just as wise nature
has placed the genitals in man, seeds in the center of apples and pits in
the middle of other fruit. Likewise, the onion protects with the hundred
skins that envelop it the precious seed from which ten million more will
take their essence. The apple is a small universe in itself, and the sun
is the seed, warmer than the other parts. That globe sheds the heat that
preserves it. And the onion seed is the small sun of that little world; it
warms and nourishes the vegetative salt of that body.
Given that, I say that the earth needs the light, heat and influence
of that great fire. It revolves about it to receive equally in all its
parts the energy that preserves it. It would be as ridiculous to believe
that that great luminous body revolved about a point that it has nothing
to do with as to imagine that when we see a skylark being roasted that the
fireplace revolves around it in order to cook it. Besides, if the sun had
to do all that work, it would be like saying that medicine needed a
patient, that the strong had to yield to the weak, the greater serve the
lesser, and that, instead of a vessel sailing along the coast of a
province, the province moved around the vessel.
* www.bewilderingstories.com/issue28/cyrano3.html
This was written only 22 years after Galileo had to recant the
vehement defense of heliocentrism that he had published in his Dialogue
concerning the two chief world systems and which condemned him to live in
captivity the remainder of his days. But Cyrano was not only a good reader of
Galileo, but also went even further than the master who no doubt inspired him:
a little further along in the dialogue that we transcribe, the protagonist
affirms:
I think the planets are worlds revolving around the sun and that the fixed
stars are also suns that have planets revolving around them. We can't see
those worlds from here because they are so small and because the light
they reflect cannot reach us. How can one honestly think that such
spacious globes are only large, deserted fields? And that our world was
made to lord it over all of them just because a dozen or so vain wretches
like us happen to be crawling around on it?
* en.wikiquote.org/wiki/Cyrano_de_Bergerac
17 May 2004
62. The passing of the years
TODAY our life begins a new cycle of 365 days. Behind an act as simple as
turning the page of the calendar there is a fascinating story that illustrates
like few others the extent of human ingenuity.
From very remote times humans observed regularity in the manifestation of
certain natural phenomena: invariably the sun rose in the east after the same
interval had elapsed; similarly, the moon appeared full high in the sky
after a precise interval of time, and the sun set at exactly the same point on
the horizon at the end of a year. Thus, nature itself offered man a very
effective means of keeping a record of his most important
activities, that is, of creating an almanac in which would appear the dates of
the events that, for one reason or another, he wished to recall.
It is known that the Egyptians, the Sumerians and the people of
Mesoamerica succeeded in implementing surprisingly precise calendars. However, our
own is the heir of one which, in its origins, was far from being so. The
tradition recounts that the mythical Romulus was responsible for the institution
of the Roman calendar, which was a hybrid of the solar and the lunar calendars.
The record of this calendar began with the founding of the city of Rome, in the
year 753 B.C., and consisted of ten months of 30 days. There were no months for
winter, for it was thought that during that interval human activities were
interrupted. The 65 days lacking to complete the solar year were added
throughout the year to the taste of the priests and high functionaries to
commemorate events whether civic or religious.
The people were not permitted access to the calendar. In truth, except for
the powerful, no Roman knew what day it was. By the year 550 B.C. Rome
already was a vigorous nation that could not allow itself the luxury of having
an imprecise calendar unknown to the population. Therefore, the king Numa
Pompilius had it published and changed it, adding the months of January and
February to total 355 days, and added an extra month every two years with
the goal of squaring it with the solar year. But the latter measures 365 1/4
days, which is why by the year 46 B.C. the calendar of Pompilius produced
winter in the autumn months and autumn in those of summer. The great Julius
Caesar ordered the problem of the calendar definitively resolved. He
constituted it with 12 months alternating between 30 and 31 days, except
February, which had only 29, for a total of 365; a day would be added to
February every four years (the leap year) to adjust for the surplus quarter
day of the solar year. He took the occasion also to baptize the seventh month
with his own name. The year in which this adjustment was made, with the purpose
that the spring equinox would fall on the 21st of March, had 15 months, and was
known as the last year of confusion. His grandnephew and successor, the
tireless Octavian Augustus, not wanting to be left behind, baptized the
month that follows July with his own name; yet since no one could stand that it
should have a day less than its predecessor, he ordered that August also would
have 31 days, which in the final account affected February, which remained with
only 28.
The Julian calendar (which Christianity adopted, changing the beginning
date to the supposed one for the birth of Christ) functioned reasonably well,
although it was slightly longer than the solar year: every 128 years it was a
day ahead of the latter. In 1582, a year in which the spring equinox fell on
March 10th due to this advance, Pope Gregory XIII ordered that the calendar be
newly adjusted. Ten days were dropped from the month of October and it was agreed
to omit three leap days every 400 years. That is our calendar today. It is
quite precise, although by 4317 it will have gained a day on the solar year.
We shall not be there then to correct it. Happy new year!
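The drift behind each of these reforms can be verified with a few lines of arithmetic. The sketch below uses a rounded tropical year of 365.2422 days, so the drift periods it prints are approximations, not the exact figures of any one historical account:

```python
# Mean year length implied by each leap-year rule, versus the
# tropical (solar) year of roughly 365.2422 days.
TROPICAL = 365.2422

julian = 365 + 1/4                     # one leap day every 4 years
gregorian = 365 + 1/4 - 1/100 + 1/400  # drop 3 leap days per 400 years

def years_per_day_of_drift(mean_year):
    """Years for the calendar to slip one day against the sun."""
    return 1 / abs(mean_year - TROPICAL)

def is_leap(year):
    """Gregorian rule: every 4th year, except centuries not divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

print(round(years_per_day_of_drift(julian)))     # roughly 128 years
print(round(years_per_day_of_drift(gregorian)))  # a few thousand years
print(is_leap(2000), is_leap(1900))              # True False
```

With these rounded figures the Julian rule slips about a day every 128 years, while the Gregorian rule holds to within a day over several millennia.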
1 January 2005