The Ancients

When early forms of man first began to use tools, they took nature as they
found it. The thighbone of a large animal made a handy club; so did the branch
torn from a tree. A rock was a convenient missile.

As millennia passed, men learned to shape rocks to give them cutting edges,
or a gripping end. They learned to fit rocks into wooden handles, shaped for
the purpose. Nevertheless, rock remained rock and wood remained wood.

However, there were times when the nature of substances did change.
Lightning might set fire to a forest and the blackened or powdery ash was
nothing like the wood that had existed before. Again, meat might decay and
smell bad; fruit juice might grow sour on standing, or become oddly stimulating
to drink.

It is such changes in the nature of substances (accompanied, as mankind
eventually discovered, by fundamental changes in structure) that form the
subject matter of the science we now call chemistry. Fundamental
alteration in the nature and structure of a substance is chemical change.

The opportunity to bring about chemical change deliberately for his own
benefit arrived when man had mastered the art of starting and maintaining a
fire. (This, in historical terms, was the "discovery of fire.") That
art achieved, man became a practicing chemist, for he had to devise methods for
causing wood, or other combustible material, to combine with air at a rate fast
enough to produce sensible heat and light, as well as ashes, smoke, and vapors.
Thus, wood had to be dried, some of it had to be powdered to tinder,
temperatures had to be raised to the ignition point by friction or otherwise,
and so on.

The heat produced by fire could be used to bring about further chemical
changes. Food was cooked and its color, texture, and taste thereby altered.
Clay could be baked into bricks and pottery. Eventually, ceramics, glazes, even
forms of glass itself, could be formed.

The first materials used by man were those universals he found all about:
wood, bone, hide, rock. Of these, rock is the most durable, and it is early man's
stone implements that remain today as clear reminders of that long-gone time.
So we speak of the Stone Age.

Mankind was still in the Stone Age when, about 8000 B.C., a revolutionary
change in food production was introduced in certain regions of what is now known
as the Middle East. Previously, man had hunted food as any other animal might.
Now he learned to domesticate animals and care for them as a reliable food
supply. Even more important, he learned to cultivate plants. With animal
husbandry and agriculture developed, a more stable and ample food supply was
available, and the population increased. Agriculture required men to remain in
one place so that permanent habitations were built and cities developed. That
evolution marks the beginning of civilization, for the word comes from the Latin
term for "city."

For the first couple of thousand years of this earliest civilization,
stone remained the characteristic tool material, although new techniques for
handling it were devised. This New Stone Age, or Neolithic Period,
was characterized by the careful polishing of stone. Pottery, too, reached an
advanced stage of development. Slowly, the advances of the Neolithic period
spread out from its Middle Eastern center. By 4000 B.C. the characteristics of
the culture had appeared in western Europe. By then the time was ripe for
additional changes in the Middle East - in Egypt and Sumeria (the region now
occupied by the modern nation of Iraq).

Mankind began to learn to make use of comparatively rare materials. For the
sake of the useful properties of the new materials, men learned to undergo all
the inconveniences of tedious searching and processing. We call these materials
metals, a word which in itself expresses this early change, for it is
believed to be derived from a Greek word meaning "to search for."

The first metals must have been found existing in the form of nuggets. They
must have been pieces of copper or gold, for these are among the
few metals occasionally found free in nature. The reddish color of copper or
the yellowish color of gold must have caught the eye; and the metallic luster,
which is so much more startling and beautiful than the flat, nondescript
coloring of most stones, must then have held it. Undoubtedly the first use of
metals was as ornaments, in whatever form the pieces had been found, much as
colored pebbles or pearly sea shells might have so been used.

The advantage of metals over other pretty bits of matter lay in this,
however: Copper and gold are malleable; that is, they can be beaten flat
without breaking. (Stone, so treated, would powder to dust; wood or bone would
split and splinter.) This property undoubtedly was discovered by accident, but
it could not have been long after the discovery when man's sense of artistry
caused him to beat metal nuggets into intricate shapes that would enhance their
beauty.

Workers in copper were bound to notice that this metal could easily be
beaten into a sharper edge than could be produced on a tool of rock, and that
some copper edges would hold their sharpness under conditions that would blunt a
rock edge. Furthermore, a copper edge, once blunted, could be sharpened again
more easily than a stone edge could. Only the rarity of copper prevented its
widespread use for tools as well as ornament.

Copper became less rare when it was discovered that it need not be found as
copper. It could be manufactured out of stone. How this discovery was made, or
where, or when, is not known exactly and may never be known.

We might guess that the discovery could have been made in a wood fire
started in a bed of rocks that included some bluish ones. In the ashes,
afterward, globules of gleaming copper might have been found. Perhaps this
happened many times before it eventually dawned on someone that if the proper
blue rocks were found, heating them in a wood fire would produce copper every
time. The final discovery of this fact may have taken place about 4000 B.C. and
it may have happened in the Sinai peninsula, just east of Egypt, or in the
mountainous area east of Sumeria, in modern Iran. Perhaps it happened
independently in both places.

In any case, copper became common enough to be used for tools, at least in
the advanced centers of civilization. A frying pan of copper found in an
Egyptian tomb has been dated as 3200 B.C. By 3000 B.C. a particularly hard
variety of copper was discovered. It was produced (probably by accident) by the
simultaneous heating of copper ores and tin ores. The copper-tin alloy
(the term used for a mixture of metals) is called bronze, and by 2000
B.C. bronze was common enough to be used for weapons and armor. Egyptian bronze
tools have been found in the tomb of the Pharaoh Iteti, who reigned about 3000
B.C.

The most famous event of the Bronze Age was the Trojan War, in which
bronze-clad, bronze-shielded warriors flung bronze-tipped spears at each other.
An army without metal weapons couldn't possibly stand against the bronze
warriors, and the metalworker of that day had something of the prestige of the
nuclear physicist of today. The smith was a mighty man indeed, and was even
accorded a place among the gods. Hephaestus, the lame god of the forge, was the
divine smith of Greek mythology. And even today it is no accident that "Smith"
or its equivalent is the most common name among the European peoples.

Lightning struck twice. The men of the Bronze Age knew of a metal even
harder than bronze. This was iron. Unfortunately, it was too rare and
precious to use for armor. At least, it seemed rare, for the only samples found
in early times were bits of shattered meteorites, which are not common. Nor did
there seem to be any way of obtaining iron out of rock.

The trouble was that iron was more firmly bound into its ore form than
copper was. It required more intense heat to smelt iron than to smelt copper.
A wood fire was insufficient for the purpose. A hotter charcoal fire was
required, and even then only under conditions of good ventilation.

The secret of smelting iron was finally stumbled upon in eastern Asia Minor,
perhaps as early as 1500 B.C. The Hittites, a people who built a great empire
in Asia Minor, were the first to use iron routinely for tools. Letters dated
about 1280 B.C., from a Hittite king to his viceroy in an iron-rich mountain
region, make definite references to iron production.

Iron in pure form (wrought iron) is not very hard. However, an iron
implement or weapon may pick up enough carbon from charcoal to form a surface
layer of the iron-carbon alloy we call steel. This skin is harder than
even the best bronze, and holds a sharper edge longer. It was this discovery of
"steeling" in Hittite territory that was the crucial turning point in
iron metallurgy. An army clad in hard iron and armed with hard iron was
reasonably sure to defeat another army clad in and armed with bronze. Thus came
the Iron Age.

The Dorians, a barbaric Greek tribe equipped with some iron weapons, invaded
the Greek peninsula from the north in about 1100 B.C. and gradually overcame the
more civilized but only bronze-armed Mycenaean Greeks who were already on the
scene. Some Greeks penetrated to Canaan and brought iron weapons with them.
These were the Philistines, who play so important a role in the early books of
the Bible. Against them the Israelites were helpless until they obtained iron
weapons for themselves under King Saul.

The first army to be equipped with good iron weapons in quantity was the
Assyrian. By 900 B.C. superior armament had helped the Assyrians build a mighty
empire for themselves.

Before the dawn of the great days of Greece, then, the practical chemical
arts had reached a good state of advancement. This was particularly true in
Egypt, where there was great religious interest in methods for the embalming and
preserving of the human body after death. Egyptians were expert not only in
metallurgy but also in the production of pigments from the mineral world and
juices and infusions from the plant world. (The chemical arts were also
developed in India and China.)

According to one theory, the word khemeia derives from the
Egyptians' name for their own land, Kham. (This name is also used in
the Bible, where in the King James Version it becomes Ham.) Khemeia
therefore might be "the Egyptian art."

A second theory, somewhat more favored, is that khemeia is
derived from the Greek Khumos, meaning the juice of a plant, so
that khemeia may be considered as "the art of extracting
juices." Or the juice referred to may even have been molten metal so that
the word may mean "the art of metallurgy."

Whatever the source of khemeia, it is the ancestor of our
word "chemistry."

By 600 B.C., the volatile and intelligent Greeks were turning their
attention to the nature of the universe and to the structure of the materials
composing it. The Greek scholars, or "philosophers" (lovers of wisdom),
were concerned not so much with technology and practical developments as
with the "why" of things. In short, they were the first we know of to
deal with what we would call chemical theory.

Such theory begins with Thales (c.640-546 B.C.).
There may have been Greeks before Thales, and even men before the Greeks, who
thought deeply and well about the meaning behind changes in the nature of
matter, but if so, their names and thoughts are lost to us.

Thales was a Greek philosopher, living in Miletus in Ionia, a region on the
western coast of what is now the nation of Turkey. Thales must have
asked himself the question: If one substance can be changed into another, as a
bluish rock can be changed into red copper, what is the true nature of the
substance? Is it rock or is it copper? Or are both something else entirely?
Can any substance be changed into any other substance, so that all substances
are different aspects of one basic material?

To Thales the answer to the last question seemed to be yes, if only because in
that way a basic simplicity and order could be introduced into the universe. What
remained, then, was to decide what that one basic material, or element,
might be. ("Element" is a Latin word of uncertain origin. The Greeks
did not use it, but it is so important to modern chemistry that there is no way
of avoiding its use even with reference to Greek times.)

Thales decided the element was water. Of all substances water seemed
present in greatest quantities. Water surrounded the land; it permeated the
atmosphere in vapor form; it trickled through the solid earth; life was
impossible without it. He visualized the earth as a flat disk, topped by a
semi-sphere of sky, and floating on an infinite ocean of water.

Thales' decision that there was an element of which all substances were
formed met with considerable acceptance among later philosophers. His decision
that the element was water was, however, disputed.

In the century after Thales, astronomical thinking came little by little to
the conclusion that the sky was not a semi-sphere, but a complete one. The
earth, also spherical, was suspended at the center of the hollow sphere of the
sky.

The Greeks did not accept the notion that a vacuum (complete emptiness)
could exist, so they did not believe that the space between the suspended earth
and the distant sky could contain nothing. Since the portion of the space
between earth and sky that men could experience directly contained air, it
seemed reasonable to suppose there was air all the way.

It may have been reasoning of this sort that led the Greek philosopher
Anaximenes, also of Miletus, to conclude, about 570 B.C., that air was the
element of the universe. He felt that toward the center of the universe it was
compressed, forming the harder and denser varieties of substance such as water
and earth.

On the other hand, the philosopher Heraclitus
(c.540-475 B.C.), from the neighboring town of Ephesus, took a different tack.
If it was change that characterized the universe, then for the element one ought
to seek a substance for which change was most characteristic. This substance,
to him, seemed to be fire, ever shifting, ever changing. It was the fieriness
in everything that made change so inevitable. (It is easy to smile at these
early notions, but actually these Greek guesses were quite profound. Suppose we
substitute for "air", "water", "earth", and "fire",
the very similar terms "gas", "liquid", "solid",
and "energy". It is true that gases will condense to liquids if
cooled and to solids if cooled still further. This is much like the situation
Anaximenes imagined. And Heraclitus's views foreshadow the modern conception of
energy as both agent and consequence of chemical change.)

In the time of Anaximenes, the Persians had conquered the Ionian coast.
When an Ionian revolt failed, Persian rule became harsh, and under suppression
the scientific tradition faded, though not before migrating Ionians had
carried it westward. Pythagoras of Samos
(c.582-497 B.C.), native of an island of Ionia, left Samos in 529 B.C. and
traveled to southern Italy, where his teachings left behind an influential body
of thinking.

Eminent among those who adhered to the Pythagorean teachings was the Greek
philosopher Empedocles (c.490-c.430 B.C.), a native of
Sicily. He, too, labored over the problem of the element out of which the
universe was formed. There seemed no way of deciding among proposals advanced by
the Ionians, so Empedocles hit upon a compromise.

Why must there be but a single element? Why not four? There could be the
fire of Heraclitus, the air of Anaximenes, the water of Thales, and the earth,
which Empedocles himself added.

This doctrine of the four elements was accepted by the greatest of
the Greek philosophers, Aristotle (384-322 B.C.).
Aristotle did not consider the elements to be literally the substances named.
That is, he did not imagine that the water we could touch and feel was actually
the element "water"; it was merely the closest actual substance to it.

Aristotle viewed the elements as combinations of two pairs of opposed
properties: hot and cold, dry and moist. He did not believe that one property
could combine with its opposite, so in his scheme four possible combinations
were left, each of which represented a different element. Hot-and-dry was fire,
hot-and-moist was air, cold-and-dry was earth, and cold-and-moist was water.

He took one further step. Each element had its own innate set of
properties. Thus it was the nature of earth to fall and of fire to rise. The
heavenly bodies, however, had properties that seemed to differ from those of any
substance on earth. Instead of either rising or falling, the heavenly bodies
seemed to move in unchanging circles about the earth.

Aristotle therefore reasoned that the heavens had to be composed of a fifth
element, which he called "ether" (from a word meaning "to glow",
since the most characteristic property of the heavenly bodies was that they were
luminous). As the heavens seemed unchanging, Aristotle considered the ether to
be perfect, eternal, and incorruptible, quite different from the four imperfect
elements of the earth itself.

The notion of the four elements held sway over the minds of men for two
thousand years. Though now dead, as far as science is concerned, it still lives
on in our common phrases. We speak of the "raging of the elements", for
instance, when we wish to say that wind (air) and waves (water) are driven to
fury by a storm. As for the "fifth element" (ether), the phrase
becomes quinta essentia in Latin, and we still mark its Aristotelian
perfection when we speak of "quintessence" of anything, meaning that
thing in its purest and most concentrated form.

Another major question arose among the Greek philosophers, one involving
the divisibility of matter. The fragments of a stone, broken in two or even
reduced to powder, were still stone, and each fragment could be further
subdivided. Could such division and subdivision of matter proceed endlessly?

The Ionian Leucippus (c.450 B.C.) seems to have
been the first to question the perhaps natural assumption that any piece of
matter, however small, could be divided into still smaller pieces. Leucippus
maintained that eventually a piece would be obtained which was as small as it
could be and was not subject to further division.

His disciple Democritus (c.470-c.380 B.C.), of the
northern Aegean town of Abdera, continued this line of thought. He named these
ultimately small particles atomos, meaning "indivisible", and
we inherit this word as atom. The doctrine that matter is made up of
ultimately small particles and is not indefinitely divisible is known as atomism.

It seemed to Democritus that the atoms of each element were distinct in size
and shape and that it was this distinction that made each element different in
properties. The actual substances we could see and handle were composed of
mixtures of the atoms of the different elements, and one substance could be
changed into another by altering the nature of the mixture.

All this sounds remarkably modern to us, but Democritus had no way of
appealing to experiments for corroboration. (The Greek philosophers did not
experiment but came to their conclusions by arguing from "first principles".)

For most philosophers, and especially for Aristotle, the notion of a piece
of matter that could not be split into still smaller pieces seemed so
paradoxical that they could not accept it. The atomistic view therefore
remained unpopular and, for two thousand years after the time of Democritus,
little heard of.

Atomism did not die out altogether, however. The Greek philosopher
Epicurus (c.342-270 B.C.) made atomism part of his way
of thought, and Epicureanism won many adherents in the next few centuries. One
of these adherents was the Roman poet Titus Lucretius
Carus (c.95-c.55 B.C.), usually known simply as Lucretius. He expounded the
atomist viewpoint of Democritus and Epicurus in a long poem entitled De Rerum Natura ("On the Nature of
Things"). It is considered by many to be the finest didactic poem (one
intended to teach) ever written.

In any case, while the works of Democritus and Epicurus perished so that
only scraps and quotations remain, Lucretius's poem survived in full, and
preserved the atomist view into modern times, when new scientific methods
entered the struggle and brought it a final victory.