The Moron's Guide to Kerberos, Version 2.0

Brian Tung <brian@isi.edu>

[Downloaded from <gost.isi.edu/brian/security/kerberos.html>]

What follows is a brief guide to Kerberos: what it's for, how it works, how to use it. It is not for system administrators who want to know why they can't get the latest release to work, nor is it for applications programmers who want to know how to use the interface. It certainly isn't for Kerberos hackers. You know who you are.

What is Kerberos?

Kerberos is an authentication service developed at MIT, under the auspices of Project Athena. Its purpose was and is to allow users and services to authenticate themselves to each other. That is, it allows them to demonstrate their identity to each other.

There are, of course, many ways to establish one's identity to a service. The most familiar is the user password. One "logs in" to a server by typing in a user name and password, which ideally only the user (and the server) know. The server is thus convinced that the person attempting to access it really is that user.

Above and beyond the usual problems with passwords (for instance, that most people pick abysmal passwords that can be guessed within a small number of tries), this approach has an additional problem when it's translated to a network: The password must transit that network in the clear--that is, unencrypted. That means that anyone listening in on the network can intercept the password, and use it to impersonate the legitimate user. Distorting the password (for instance, by running a one-way hash over it) does no good; so long as identity is established solely on the basis of what is sent by the user, that information can be used to impersonate that user.

The key innovation underlying Kerberos (and its predecessors) is the notion that the password can be viewed as a special case of a shared secret--something that the user and the service hold in common, and which (again ideally) only they know. Establishing identity shouldn't require the user to actually reveal that secret; there ought to be a way to prove that you know the secret without sending it over the network.

And indeed there is. In Kerberos and related protocols, that secret is used as an encryption key. In the simplest case, the user takes something freshly created, like a timestamp (it need not be secret), and encrypts it with the shared secret key. This is then sent on to the service, which decrypts it with the shared key, and recovers the timestamp. If the user used the wrong key, the timestamp won't decrypt properly, and the service can reject the user's authentication attempt. More importantly, in no event does the user or the service reveal the shared key in any message passed over the network.
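The idea is simple enough to sketch in a few lines of Python. The stream cipher below is a toy I've made up purely for illustration (real Kerberos uses standardized encryption types such as AES); the point is only that the shared key itself never crosses the wire--only something encrypted with it does:

```python
import hashlib
import time

def toy_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Toy stream cipher: XOR the plaintext with a SHA-256-derived
    # keystream. Illustration only; NOT real Kerberos encryption.
    stream = b""
    counter = 0
    while len(stream) < len(plaintext):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(p ^ s for p, s in zip(plaintext, stream))

toy_decrypt = toy_encrypt  # XOR is its own inverse

# The user proves knowledge of the shared key without ever sending it:
shared_key = b"secret-shared-by-user-and-service"   # hypothetical key
timestamp = str(int(time.time())).encode()
proof = toy_encrypt(shared_key, timestamp)          # this is what crosses the wire

# Service side: decrypt with the same shared key and check freshness.
recovered = toy_decrypt(shared_key, proof)
assert abs(int(recovered) - time.time()) < 300      # within allowed clock skew
```

A real service also checks that the timestamp is fresh (Kerberos tolerates only a small clock skew, typically five minutes), so that an eavesdropper can't simply replay an old encrypted timestamp later.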

Of course, Kerberos is more complex than that, but broadly speaking, those complexities are there to do one of two things: to patch some of the problems caused even when using shared secrets in this improved way; and to make use of this shared secret more convenient. In this short tutorial, I'll discuss at a high level how Kerberos works, and why it's designed the way it is.

Incidentally, since one of Kerberos's underlying mechanisms is encryption, it pays to be clear about what kind of encryption we're discussing. Kerberos, as defined in RFC 4120, uses only so-called conventional or symmetric cryptography. In this kind of cryptography, there is only one key, which is shared by the two endpoints. The key is used to encrypt a message, and on the other end, the same key is used to decrypt that message (hence the name, symmetric cryptography).

There is another kind of cryptography, called public-key cryptography, in which there are two keys: a public key, and a private key. The public key, as its name implies, is publicly known and can be used, by anybody, to encrypt a message; to decrypt that message, though, one needs the private key, which is only known by one user, the intended recipient. (One could also encrypt with the private key and decrypt with the public key, and we'll see an example of that below.) Because of the two different keys, public-key cryptography is sometimes known as asymmetric cryptography. Kerberos, by default, does not use public-key cryptography, but RFC 4556, which I co-authored, adds public-key cryptography to the initial authentication phase; I'll say more about this in a bit.

The Basics of Kerberos

Kerberos's fundamental approach is to create a service whose sole purpose is to authenticate. The reason for doing this is that it frees other services from having to maintain their own user account records. The lynchpin to this approach is that both user and service implicitly trust the Kerberos authentication server (AS); the AS thus serves as an introducer for them. In order for this to work, both the user and the service must have a shared secret key registered with the AS; such keys are typically called long-term keys, since they last for weeks or months.

There are three basic steps involved in authenticating a user to an end service. First, the user sends a request to the AS, asking it to authenticate him to the service. Fundamentally, this request consists only of the service's name, although in practice, it contains some other information that we don't have to concern ourselves with here.

In the second step, the AS prepares to introduce the user and the service to each other. It does this by generating a new, random secret key that will be shared only by the user and the service. It sends the user a two-part message. One part contains the random key along with the service's name, encrypted with the user's long-term key; the other part contains that same random key along with the user's name, encrypted with the service's long-term key. In Kerberos parlance, the former message is often called the user's credentials, the latter message is called the ticket, and the random key is called the session key.

At this stage, only the user (and, of course, the AS) knows the session key--provided he really is the user and knows the appropriate long-term key. He generates a fresh message, such as a timestamp, and encrypts it with the session key. This message is called the authenticator. He sends the authenticator, along with the ticket, to the service. The service decrypts the ticket with its long-term key and recovers the session key, which it in turn uses to decrypt the authenticator. The service trusts the AS, so it knows that only the legitimate user could have created such an authenticator. This completes the authentication of the user to the service.
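The three steps can be sketched in Python. Here encryption is modeled as a tagged tuple rather than real cryptography, and all the names ("brian", "printer") and keys are hypothetical; what matters is the structure of the messages:

```python
import os
import time

def E(key, message):          # stand-in for symmetric encryption
    return ("enc", key, message)

def D(key, ciphertext):       # stand-in for symmetric decryption
    tag, k, message = ciphertext
    if k != key:
        raise ValueError("wrong key")
    return message

# Long-term keys registered with the AS (hypothetical values).
user_key = os.urandom(16)
service_key = os.urandom(16)

# Step 1: user -> AS: "please authenticate me to 'printer'".
# Step 2: AS generates a fresh session key and builds the two-part reply.
session_key = os.urandom(16)
credentials = E(user_key, {"service": "printer", "session_key": session_key})
ticket = E(service_key, {"user": "brian", "session_key": session_key})

# Step 3: user decrypts his credentials, builds an authenticator,
# and sends (ticket, authenticator) to the service.
sk = D(user_key, credentials)["session_key"]
authenticator = E(sk, {"timestamp": time.time()})

# Service side: recover the session key from the ticket, then
# use it to open the authenticator and check freshness.
sk_service = D(service_key, ticket)["session_key"]
auth = D(sk_service, authenticator)
assert abs(auth["timestamp"] - time.time()) < 300
```

Notice that the user never sees the service's long-term key, and the service never sees the user's; the only key they end up sharing is the short-lived session key the AS invented for them.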

There is a version of Kerberos called Bones, which is exactly like Kerberos, except that Bones doesn't encrypt any of the messages. So what is it good for? The U.S. restricts the export of cryptography; if it's sufficiently strong, it can even qualify as munitions. At one time, it was extraordinarily difficult to get crypto software out of the U.S. On the other hand, there is a wide variety of legitimate software that is exported (or created outside the U.S. altogether) and expects Kerberos to be there. Such software can be shipped with Bones instead of Kerberos, tricking it into thinking that Kerberos is there.

Doug Rickard wrote to explain how Bones got its name. In 1988, he was working at MIT, with the Project Athena group. He was trying to get permission from the State Department to export Kerberos to Bond University in Australia. The State Department wouldn't allow it--not with DES included. To get it out of the country, they had to not only remove all calls to DES routines, but all comments and textual references to them as well, so that (superficially, at least) it was non-trivial to determine where the calls were originally placed.

To strip out all the DES calls and garbage, John Kohl wrote a program called piranha. At one of their progress meetings, Doug jokingly said, "And we are left with nothing but the Bones." For lack of a better term, he then used the words "Bones" and "boned" in the meeting minutes to distinguish between the DES and non-DES versions of Kerberos. "It somehow stuck," he says, "and I have been ashamed of it ever since."

Back at Bond University, Errol Young then put encryption back into Bones, thus creating Encrypted Bones, or E-Bones.

Sometimes, the user may want the service to be authenticated in return. To do so, the service takes the timestamp from the authenticator, adds the service's own name to it, and encrypts the whole thing with the session key. This is then returned to the user.
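This mutual-authentication step is small enough to sketch on its own, again with a tagged-tuple stand-in for encryption and hypothetical names. Producing the reply proves the service knows the session key, which it could only have recovered by decrypting the ticket with its own long-term key:

```python
import time

def E(key, message):          # stand-in for symmetric encryption
    return ("enc", key, message)

def D(key, ciphertext):       # stand-in for symmetric decryption
    tag, k, message = ciphertext
    if k != key:
        raise ValueError("wrong key")
    return message

session_key = "session-key-from-the-ticket"      # hypothetical value
ts = time.time()

# User -> service: the authenticator, as before.
authenticator = E(session_key, {"timestamp": ts})

# Service -> user: the same timestamp plus the service's own name,
# sealed under the session key, authenticating the service in return.
received = D(session_key, authenticator)
reply = E(session_key, {"timestamp": received["timestamp"],
                        "service": "printer"})

# User side: a matching timestamp convinces him he's talking to
# the genuine service, not an impostor.
assert D(session_key, reply)["timestamp"] == ts
```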

The Ticket Granting Server

One of the inconveniences of using a password is that each time you access a service, you have to type the darned thing in. It can be a tremendous nuisance, if you have to access a variety of different services, and so the temptation is to use the same password for each service, and further to make that password easy to type. Kerberos eliminates the problem of having passwords for each of many different services, but there is still the temptation of making the one password easy to type. This makes it possible for an attacker to guess the password--even if it is used as a shared secret key, rather than as a message passed over the network.

Kerberos resolves this second problem by introducing a new service, called the ticket granting server (TGS). The TGS is logically distinct from the AS, although they may reside on the same physical machine. (They are often referred to collectively as the KDC--the Key Distribution Center, from Needham and Schroeder [1].) The purpose of the TGS is to add an extra layer of indirection, so that the user only needs to enter a password once; the ticket and session key obtained with that password are used for all further tickets.

So, before accessing any regular service, the user requests a ticket from the AS to talk to the TGS. This ticket is called the ticket granting ticket, or TGT; it is also sometimes called the initial ticket. The session key for the TGT is encrypted using the user's long-term key, so the password is needed to decrypt it from the AS's response to the user.

After receiving the TGT, any time that the user wishes to contact a service, he requests a ticket not from the AS, but from the TGS. Furthermore, the reply is encrypted not with the user's secret key, but with the session key that came with the TGT, so the user's password is not needed to obtain the new session key (the one that will be used with the end service). Aside from this wrinkle, the rest of the exchange continues as before.
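The two-stage flow can be sketched as follows; as before, encryption is a tagged-tuple stand-in and all keys and names are hypothetical. The thing to notice is that the password-derived key is used exactly once, in the AS exchange:

```python
import os

def E(key, message):          # stand-in for symmetric encryption
    return ("enc", key, message)

def D(key, ciphertext):       # stand-in for symmetric decryption
    tag, k, message = ciphertext
    if k != key:
        raise ValueError("wrong key")
    return message

user_key = os.urandom(16)     # derived from the user's password
tgs_key = os.urandom(16)      # the TGS's long-term key
service_key = os.urandom(16)  # the end service's long-term key

# AS exchange: the password-derived key is needed here, and only here.
tgt_session = os.urandom(16)
as_reply = E(user_key, {"session_key": tgt_session})
tgt = E(tgs_key, {"user": "brian", "session_key": tgt_session})
sk_tgt = D(user_key, as_reply)["session_key"]

# TGS exchange: the reply is sealed under the TGT's session key,
# so no password is needed to obtain the new service ticket.
svc_session = os.urandom(16)
tgs_reply = E(sk_tgt, {"session_key": svc_session})
ticket = E(service_key, {"user": "brian", "session_key": svc_session})
sk_svc = D(sk_tgt, tgs_reply)["session_key"]   # no password required
```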

It's sort of like when you visit some workplaces. You show your regular ID to get a guest ID for the workplace. Now, when you want to enter various rooms in the workplace, instead of showing your regular ID over and over again, which might make it vulnerable to being dropped or stolen, you show your guest ID, which is only valid for a short time anyway. If it were stolen, you could get it invalidated and be issued a new one quickly and easily, something that you couldn't do with your regular ID.

The advantage this provides is that while passwords usually remain valid for months at a time, the TGT is good only for a fairly short period, typically eight or ten hours. Afterwards, the TGT is not usable by anyone, including the user or any attacker. The TGT, together with any tickets you obtain using it, is stored in the credentials cache.

The term "credentials" actually refers to both the ticket and the session key in conjunction. However, you will often see the terms "ticket cache" and "credentials cache" used more or less interchangeably.

Cross-Realm Authentication

So far, we've considered the case where there is a single AS and a single TGS, which may or may not reside on the same machine. As long as the number of requests is small, this is not a problem. But as the network grows, the number of requests grows with it, and the AS/TGS becomes a bottleneck in the authentication process. In short, this system doesn't scale. For this reason, it often makes sense to divide the world into distinct realms. These divisions are often made on organizational boundaries, although they need not be. Each realm has its own AS and TGS.

To allow for cross-realm authentication--that is, to allow users in one realm to access services in another--it is necessary first for the user's realm to register a remote TGS (RTGS) in the service's realm. Such an association typically (but not always) goes both ways, so that each realm has an RTGS in the other realm. This now adds a new layer of indirection to the authentication procedure: First the user contacts the AS to access the TGS. Then the TGS is contacted to access the RTGS. Finally, the RTGS is contacted to access the end service.

Actually, it can be worse than that. In some cases, where there are many realms, it is inefficient to register each realm in every other realm. Instead, there is a network of realms, so that in order to contact a service in another realm, it is sometimes necessary to contact the RTGS in one or more intermediate realms. These realms are called the transited realms, and their names are recorded in the ticket. This is so the end service knows all of the intermediate realms that were transited, and can decide whether or not to accept the authentication. (It might not, for instance, if it believes one of the intermediate realms is not trustworthy.)
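The end service's decision can be sketched as a simple policy check over the transited list recorded in the ticket. The realm names here are hypothetical:

```python
def accept_transited(transited, trusted_realms):
    # Service-side policy sketch: accept the authentication only if
    # every intermediate realm recorded in the ticket is one we trust.
    # An empty list means direct cross-realm: nothing to distrust.
    return all(realm in trusted_realms for realm in transited)

trusted = {"ISI.EDU", "MIT.EDU", "UW.EDU"}      # hypothetical policy
assert accept_transited(["ISI.EDU", "MIT.EDU"], trusted)
assert not accept_transited(["ISI.EDU", "EVIL.ORG"], trusted)
```

Real implementations express the transited list in a compressed encoding and support wildcard-style policies, but the decision being made is this one.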

This feature is new to Kerberos in Version 5. In Version 4, only peer-to-peer cross-realm authentication was permitted. In principle, the Version 5 approach allows for better scaling if an efficient hierarchy of realms is set up; in practice, realms exhibit significant locality, and they mostly use peer-to-peer cross-realm authentication anyway. However, the advent of public-key cryptography for the initial authentication step (for which the certificate chain is recorded in the ticket as transited "realms") may again justify the inclusion of this mechanism.

Kerberos and Public-Key Cryptography

As I mentioned earlier, Kerberos relies on conventional or symmetric cryptography, in which the keys used for encryption and decryption are the same. As a result, the key must be kept secret between the user and the KDC, since if anyone else knew it, they could impersonate the user to any service. What's more, in order for a user to use Kerberos at all, he or she must already be registered with a KDC.

Such a requirement can be circumvented with the use of public-key cryptography, in which there are two separate keys, a public key and a private key. These two keys are conjugates: Whatever one key encrypts, the other decrypts. As their names suggest, the public key is intended to be known by anyone, whereas the private key is known only by the user; not even the KDC is expected to know the private key.

Public-key cryptography can be integrated into the Kerberos initial authentication phase in a simple way--in principle, at least. When the KDC (that is, the AS) generates its response, encapsulating the session key for the TGT, it does not encrypt it with the user's long-term key (which doesn't exist). Rather, it encrypts it with a randomly generated key, which is in turn encrypted with the user's public key. The only key that can reverse this public-key encryption is the user's private key, which only he or she knows. The user thus obtains the random key, which is in turn used to decrypt the session key, and the rest of the authentication (for instance, any exchanges with the TGS) proceeds as before.

You may well wonder why a randomly generated key must be used. Why not simply encrypt the session key with the user's public key? To begin with, public-key operations are not designed to operate on arbitrary data, which might be any length; they are designed to operate on keys, which are short. Public-key cryptography is a relatively expensive operation. So when you make a call to a library routine to encrypt anything, no matter how long it is, it first encrypts it using symmetric cryptography with a randomly generated key, and then encrypts that random key with the public key.
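This hybrid pattern--bulk data under a fresh symmetric key, with only that short key going through the expensive public-key operation--can be sketched in Python. Both ciphers below are toys invented for illustration (the "public-key" one is just a tagged tuple with a made-up key-pairing convention):

```python
import os
import hashlib

def sym_encrypt(key, data):
    # Toy XOR stream cipher; illustration only.
    stream = b""
    ctr = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return bytes(a ^ b for a, b in zip(data, stream))

sym_decrypt = sym_encrypt     # XOR is its own inverse

def pub_encrypt(public_key, data):     # stand-in for RSA-style encryption
    return ("pub-enc", public_key, data)

def priv_decrypt(private_key, ct):
    tag, pub, data = ct
    if pub != "pub:" + private_key:    # toy public/private key pairing
        raise ValueError("wrong private key")
    return data

# The KDC's reply: the session key plus the other items Kerberos
# bundles in, however long that ends up being.
reply = b"session key for the TGT, plus the other items Kerberos bundles in"
random_key = os.urandom(16)
wrapped_reply = sym_encrypt(random_key, reply)       # cheap, any length
wrapped_key = pub_encrypt("pub:alice", random_key)   # expensive, but short

# User side: the private key unwraps the random key,
# which in turn unwraps the reply.
rk = priv_decrypt("alice", wrapped_key)
assert sym_decrypt(rk, wrapped_reply) == reply
```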

Even though we've been referring just to the session key, Kerberos actually encapsulates a number of other items along with it. As a result, the performance of public-key cryptography becomes a direct factor.

There's a catch. (Of course, there had to be.) The catch is that even though the user and the KDC don't have to share a long-term key, they do have to share some kind of association. Otherwise, the KDC has no confidence that the public key the user is asking it to use belongs to any given identity. I could easily generate a public and a private key that go together, and assert that they belong to you, and present them to the KDC to impersonate you. To prevent that, public keys have to be certified. Some certification authority, or CA, must digitally sign the public key. In essence, the CA encrypts the user's public key and identity with its private key, which binds the two together. Typically, the CA is someone that is trusted generally to do this very thing. Afterward, anyone can verify that the CA did indeed sign the user's public key and identity by decrypting it with the CA's public key. (See how clever the uses of the two complementary keys can be?)

In reality, the CA doesn't encrypt the user's public key with its private key, for the same reasons that the KDC doesn't encrypt the session key with the user's public key. Nor does it encrypt it first with a random key, since the user's public key and identity don't have to be kept confidential. Instead, it passes the public key and identity through a special function called a one-way hash. The hash (sometimes called a message digest) outputs a random-looking short sequence of bytes, and it's these bytes that are encrypted by the CA's private key. This establishes that only the CA could have bound the public key to the user's identity, since you can't just create any other message that also hashes to those same bytes (that's why the hash is called one-way).
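The hash-then-sign pattern looks like this in sketch form. The hashing is real (SHA-256 from the standard library); the "encryption with the CA's private key" is again a tagged-tuple stand-in, with a made-up key-pairing convention:

```python
import hashlib

def sign(ca_private_key, public_key, identity):
    # Hash the (public key, identity) pair, then "encrypt" the short
    # digest with the CA's private key -- modeled here as a tagged tuple.
    digest = hashlib.sha256(public_key + b"|" + identity).hexdigest()
    return ("sig", ca_private_key, digest)

def verify(ca_public_key, public_key, identity, signature):
    tag, priv, digest = signature
    if "pub:" + priv != ca_public_key:     # toy key pairing
        return False
    # Recompute the hash; the signature binds key and identity together.
    return digest == hashlib.sha256(public_key + b"|" + identity).hexdigest()

# Hypothetical CA and user:
cert_sig = sign("ca-priv", b"alices-public-key", b"alice@EXAMPLE.ORG")
assert verify("pub:ca-priv", b"alices-public-key", b"alice@EXAMPLE.ORG", cert_sig)

# Substituting a different public key breaks the binding:
assert not verify("pub:ca-priv", b"mallorys-key", b"alice@EXAMPLE.ORG", cert_sig)
```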

You may see a potential problem: How does the KDC know that the key that signed the user's public key belongs to the CA? Doesn't someone else need to sign the CA's key--a higher-level CA perhaps? This could lead to an infinite recursion. At some point, however, the KDC must, by some other means, establish a CA's identity outside of digitally signing things, and know that a public key definitely belongs to it. This terminates the chain of certificates, starting from the user's public key and ending in the trusted CA's public key, and it is that trusted CA that represents the association shared by the user and the KDC. The advantage over sharing a long-term key is that the various authorities don't actually have to be on-line for consultation while the KDC is authenticating the user.
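The chain-walking logic, stripped of actual signature checking, is a short loop. Each certificate must be issued by the subject of the one above it, and the topmost issuer must be a CA the KDC already trusts by other means:

```python
def chain_ends_in_trust(chain, trusted_cas):
    # chain[0] is the user's certificate; each certificate's issuer
    # must match the next certificate's subject, and the final issuer
    # must be a trust anchor established out of band.
    # (A real implementation would also verify each signature.)
    for cert, parent in zip(chain, chain[1:]):
        if cert["issuer"] != parent["subject"]:
            return False
    return chain[-1]["issuer"] in trusted_cas

# Hypothetical chain: user cert -> intermediate CA -> trusted root.
chain = [
    {"subject": "alice", "issuer": "Dept CA"},
    {"subject": "Dept CA", "issuer": "Root CA"},
]
assert chain_ends_in_trust(chain, {"Root CA"})
assert not chain_ends_in_trust(chain, {"Some Other Root"})
```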

Incidentally, if you've used PGP (or GPG), which also employs public-key cryptography, you may have noticed that you have to enter a password or passphrase before being able to use your private key. That passphrase does not, however, actually generate the private key, which is instead generated only once, at the same time the public key is created. Rather, the passphrase is used to generate a symmetric key (just as in Kerberos), and that symmetric key is used to encrypt the private key, so that no one can snoop around on your machine and use it. Whenever you do want to use it, you have to enter the same passphrase, which generates the same symmetric key, and your private key is decrypted long enough to use it. (After you're done with it, any program that's correctly written will wipe the decrypted private key out of memory.)
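A sketch of that passphrase mechanism: the key derivation below uses the standard library's PBKDF2, which is a real technique for this job, while the cipher sealing the private key is a deliberately crude XOR stand-in (real tools use AES or similar). The salt and key bytes are hypothetical:

```python
import hashlib

def passphrase_to_key(passphrase: bytes, salt: bytes) -> bytes:
    # Derive a symmetric key from the passphrase; same passphrase
    # plus same salt always yields the same key.
    return hashlib.pbkdf2_hmac("sha256", passphrase, salt, 100_000)

def xor_seal(key: bytes, data: bytes) -> bytes:
    # Toy cipher for illustration only; sealing and unsealing
    # are the same XOR operation.
    stream = hashlib.sha256(key).digest() * (len(data) // 32 + 1)
    return bytes(a ^ b for a, b in zip(data, stream))

salt = b"stored-alongside-the-key-file"
private_key = b"the long-lived private key bytes"

# What actually sits on disk is the sealed private key.
on_disk = xor_seal(passphrase_to_key(b"correct horse", salt), private_key)

# Entering the same passphrase re-derives the same symmetric key:
assert xor_seal(passphrase_to_key(b"correct horse", salt), on_disk) == private_key

# A wrong passphrase derives a different key and yields garbage:
assert xor_seal(passphrase_to_key(b"wrong", salt), on_disk) != private_key
```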

Additional Information

Some time ago, I gave a talk about Kerberos, from a more historical perspective. Here is an essay that is chiefly drawn from that talk.

I remember, as a child, going to the Exploratorium in San Francisco. The Exploratorium is a hands-on science museum for kids and adults, where you can investigate such disparate science topics as the motion of objects in a gravitational field, the effect of darkness and delay on three-dimensional vision, how engines work, and so forth. It's still there, and a great way to spend a day.

One of the exhibits they had was a sort of primitive pinball machine, with only one bumper. This bumper had an unknown shape -- at least, it was unknown to the user at first -- because it was hidden underneath a circular mask, which covered it completely. However, you could fire pinballs up at the bumper, which would strike it underneath the mask, and then bounce back every which way. You could slide the plunger back and forth along the bottom edge of the frame, and also rotate the bumper, so that you could strike the bumper from different positions and angles.

The purpose of all this firing and bouncing was to try to determine the shape of the bumper. You couldn't see the actual bumper, you couldn't get at it to take it apart; all you could do was fire pinballs up at it and observe which way they bounced. They had a whole bunch of these machines; eventually, you could piece together that the shape was a star, or a hexagon, or a triangle, or whatever it was. Sometimes you would decide right away that it was a square, and all of a sudden one of the pinballs, fired just right, would bounce back in a completely unexpected direction. You would have to play around a little longer before you realized that the shape was really a cross.

That, to me, is the essence of science. You practically never have full and complete access to the internal workings of whatever it is you're trying to figure out. Either it's too small, or too old and decayed, or too hard to get at; it's always too something to look at directly and figure out what's going on. You're always limited in how you can manipulate the darn thing. And yet, even using a limited tool set, you can arrive at a more complete picture of how it works.

In astronomy, of course, the problem is that everything is too far away. Only in this century has it become feasible to travel to the moon and planets, and even then we've only been able to bring back samples from the moon for first-hand observation and analysis. And although it now appears conceivable that we may one day travel to the stars, it's quite possible that even then we won't be able to go in and take core samples. How, then, do we figure out what's in the stars, and what makes them shine?

For generations upon generations, human observers have looked up at the stars and been unable to see anything more distinct than dots of light. It might have been possible for eagle-eyed humans to see the discs of Venus or Jupiter, to be sure, but even the ancient Greeks knew that these were planets and not fixed stars, so of course they were different. The stars remained infinitesimal points of light.

Even after Galileo first brought a telescope to bear on the stars (and no, he did not invent it), that's all anyone ever saw: points of light. Galileo's telescopes were so inferior by today's standards that he was unable to distinguish Saturn's rings, something that practically any dime-store telescope nowadays can do. Anyone who saw the stars as finite in extent soon discovered that their telescope was in error, and that they had not actually seen the surface of a star. Centuries passed, telescopes improved in both resolution and light-gathering power -- and points of light the stars steadfastly remained.

In the early 19th century, the French philosopher Auguste Comte (1798-1857) was trying to come up with an example of something that would forever be unattainable. The inability to resolve even one star other than our own sun led him to conclude that we would never be able to determine the composition of the stars. We would never be able to sample them physically, he argued, and there seemed no other way to analyze them, since no one could see the surface of any star, other than our own sun:

To attain a true idea of the nature and composition of [astronomy], it is indispensable. . .to mark the boundaries of the positive knowledge that we are able to gain of the stars. . . .We can never by any means investigate their chemical composition. . . .The positive knowledge we can have of the stars is limited solely to their geometrical and mechanical properties. [Cours de philosophie positive, 1842]

By "mechanical properties," Comte meant the movement of the stars through space, not their internal workings. As it sometimes goes, though, it took only a couple of years after Comte died for him to be proven wrong, and it didn't in fact require travel to the stars. Since Isaac Newton (1642-1727), it had been known that white light, such as light from the sun, could be separated into its constituent colors using a prism. This spectrum, as it came to be called, appeared to be continuous -- that is, there were no gaps anywhere where colors might be missing.

By the 19th century, however, the Bavarian optician Joseph von Fraunhofer (1787-1826) had discovered that the spectrum was not, in fact, completely continuous, but instead had little gaps here and there. These gaps are now called Fraunhofer lines in his honor. Then, in 1859, the German physicist Gustav Kirchhoff (1824-1887) discovered that when you heated certain minerals, the light emitted by the resulting flame wasn't a smooth and complete spectrum, as was light from the sun. Instead, it was a collection of small bands of light, each in its proper position along the spectrum, but in isolation.

Could it be that the gaps and the light bands were related in some way? When Kirchhoff looked at a pair of gaps in the solar spectrum, near the orange portion, he noticed that they seemed to be in exactly the same position as the two lines emitted by burning sodium. To test their similarity, he decided to pass sunlight through burning sodium vapor, and then subject the combined light to a prism, expecting to see the emission lines of the sodium vapor fill in the gaps in the solar spectrum. To Kirchhoff's great surprise, they did no such thing. If anything, the gaps became even darker and more distinct than they had been without the sodium vapor. Were the emission lines of the sodium vapor just off a bit from the gaps in the solar spectrum, and somehow mysteriously drawing further light from those gaps?

It took a few experiments more before Kirchhoff discovered what was going on. The burning sodium vapor was in fact emitting the same lines as were missing in the solar spectrum. However, when light from one source (the sun) passes through another light source cooler than the first (the burning sodium), the second source absorbs precisely the same lines it would emit by itself. In recognition of Kirchhoff's investigations into gas emissions and absorptions, this whole business about cooler gases absorbing and hotter gases emitting is known as Kirchhoff's Law. In this way, he had proven Comte wrong, just two years after the latter's death.

Soon after, in 1862, the Swedish physicist Anders Ångström (1814-1874) examined the solar spectrum in much finer detail than Kirchhoff had, and discovered hydrogen in the sun. Ångström measured the position of the hydrogen lines (and a whole host of others) to a precision of one ten-billionth of a meter. In his honor, that unit of length is now named after him, and you will still see, occasionally, light wavelengths measured in Ångströms. (The appropriate official metric unit is the nanometer, which is a billionth of a meter.)

He had to measure it so finely, because, as it turns out, hydrogen produces relatively faint absorption lines, even though it constitutes most of the matter in the sun (and, indeed, the universe). The sun, we now know, is about 75 percent hydrogen. The remainder is mostly an element that was first discovered six years later when the French astronomer Pierre Janssen (1824-1907) detected absorption lines that didn't appear to correspond to any known element. The British astronomer Joseph Lockyer (1836-1920) examined Janssen's data, agreed that the element indicated was a previously unknown one, and named the newly discovered substance helium, after the Greek word for "sun."

Hydrogen and helium are the two lightest elements, which explains why the earth has so little of them. The earth's gravity is simply insufficiently strong to hold onto hydrogen and helium -- it can only hold onto heavier gases, such as nitrogen, oxygen, carbon dioxide, and so forth. Hydrogen is at least reasonably reactive, so that it forms lots of compounds that the earth can hold onto, such as methane and ammonia and water. Helium, on the other hand, is an inert gas; it tends to "ignore" other elements, and the only reason that we have any amount of it for blowing up zeppelins and party balloons is that it is a byproduct of the decay of radioactive elements. (There's an interesting story about how helium was first discovered on the earth, but that's a question for another time.) When a party balloon loses its helium, that helium dissipates into the atmosphere, where it eventually finds its way outward into space and is lost forever. The sun, on the other hand, is so massive that although it is very hot, so that the hydrogen and helium atoms are jostling about very rapidly, its gravity is more than sufficient to keep those atoms from escaping.

So, the sun is made of about three-quarters hydrogen and one-quarter helium, and a tiny smattering of other elements. If we mix those gases here on earth, we certainly don't get the sun. We could light it up, and then the mixture would likely go up in a blaze of fire (provided enough oxygen were present), but it would last only for a brief time. How then does the sun stay lit?

People have been wondering about that for a long time. The earthly activity most like the shining of the sun is clearly the burning of fire. Here, of course, fire is dynamic, local in effect, and temporary in duration, whereas the sun seems steady and wide-ranging, and as far as anyone can tell, it's been shining essentially forever. Nevertheless, this was a reasonable analogy on the face of it, so before we knew what the sun was made of, the German physician Julius Mayer (1814-1878) calculated that if the sun were essentially a humongous lump of coal, there was enough fuel there to last about 5,000 years. Other substances could be substituted in place of coal, but none of them, in fact no chemical phenomenon at all, could be relied on to produce the sun's level of energy production for more than several thousand years, and in any case the soon-to-be-discovered composition of the sun made most of these infeasible to begin with.

Mayer himself then proposed another explanation. Perhaps, he suggested, meteoroids and other space debris were attracted continually by the sun's gravity and fell into the sun, in the process generating heat. It had been known for some time that a moving object carries kinetic energy according to the equation:

KE = (mv^2)/2

where m is the object's mass and v is its velocity.

When an object comes to a stop, that kinetic energy cannot simply vanish. It has to be transferred elsewhere, and the "lowest common denominator" form of energy is heat. This explains why a hammer gets hot after striking a lot of nails -- because the hammer's motion is impeded by the nails (or, if you're unlucky, your thumb), and the kinetic energy of the hammer's bulk motion is transferred to the kinetic energy of the molecules in the hammer (and the nails and your thumb, too, for that matter). Heat, in other words. Something similar might be happening with the sun, and if so, maybe that was keeping the sun lit.

Mayer's proposal had the advantage of explaining how the sun could continue shining indefinitely, since as far as anyone knew, there could be a limitless supply of space debris. However, the Irish physicist William Thomson (1824-1907), later Lord Kelvin, discovered another problem with Mayer's proposal. When space debris collides with the sun, its kinetic energy is turned into heat and that can be radiated away. However, the amount of kinetic energy, according to the above formula, depends not only on the velocity of the debris but on its mass as well, and that mass cannot be radiated away. It has to stay in the sun's bowels. Thomson calculated the rate at which mass would have to fall into the sun to support its current power output, and determined that the sun would have to "gain weight" so quickly that its gravitational force would have increased and the orbit of the earth would have shrunk measurably in historic times. Since no such shrinking had ever been detected, Mayer's proposal had to be rejected.

Well, if falling objects couldn't keep the sun lit, perhaps the sun was falling in on itself. The German philosopher Immanuel Kant (1724-1804) had earlier proposed the nebular hypothesis of the sun's formation, which theorized that the sun and the rest of the solar system condensed out of a great cloud of gas and dust. If Kant was right and the sun had indeed collapsed from a cloud of gas and dust, maybe that condensation itself was sufficient to power the sun. That way, the sun could heat up without gaining mass and changing the earth's orbit in any way. The German physicist Hermann von Helmholtz (1821-1894) calculated that under reasonable assumptions, this process was enough to keep the sun radiating at its current power for several million years. Unfortunately, by this time, the earth was known (from geological and biological lines of reasoning) to be much older than this.
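Helmholtz's figure can be recovered in a few lines (a simplified sketch: it assumes a uniform-density sphere and that, per the virial theorem, roughly half the released gravitational energy is radiated while half heats the interior):

```python
# Gravitational energy released by contraction, and the lifetime it implies
# at the sun's present luminosity (Kelvin-Helmholtz timescale, simplified).
G     = 6.674e-11    # gravitational constant
M_SUN = 1.989e30     # solar mass, kg
R_SUN = 6.96e8       # solar radius, m
L_SUN = 3.828e26     # solar luminosity, W
YEAR  = 3.156e7      # seconds per year

# Binding energy of a uniform sphere is 3GM^2/(5R); about half of the energy
# released in contracting to the present radius is radiated away.
e_radiated = 0.5 * (3 * G * M_SUN**2 / (5 * R_SUN))

t_years = e_radiated / L_SUN / YEAR
print(f"about {t_years/1e6:.0f} million years")
```

The answer comes out to several million years, which is plenty for human history but, as the essay notes, far too short for geology and biology.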

The crucial clue came in the form of the famous theory of special relativity, developed by the German-born physicist Albert Einstein (1879-1955). In it, Einstein developed his famous equation,

E = mc^2

which tells us that any mass -- a bit of starstuff, an automobile, dryer lint -- can be transformed into an astounding amount of energy. The British astronomer Arthur Eddington (1882-1944) suggested that the sun had so much hydrogen and helium because it was turning the hydrogen into helium. Helium was available in small quantities to study in the laboratory, and it had been discovered that a helium atom weighed almost but not quite as much as four hydrogen atoms; the precise ratio was closer to 3.97 to 1. Perhaps, Eddington mused, the missing mass was being transformed into energy in the form of light and heat.
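Eddington's musing is easy to quantify (an illustrative calculation using the essay's 3.97-to-1 ratio; the constants are standard reference values):

```python
# Energy released by the 4 hydrogen -> 1 helium mass deficit (illustrative).
c   = 2.998e8        # speed of light, m/s
m_H = 1.6735e-27     # mass of a hydrogen atom, kg

# A helium atom weighs about 3.97 hydrogen atoms, so the remaining
# ~0.03 hydrogen-masses (about 0.7 percent) must go somewhere.
m_lost = (4 - 3.97) * m_H
energy = m_lost * c**2      # E = mc^2, joules per helium atom formed

print(f"{energy:.1e} J per helium atom formed")
```

A few trillionths of a joule per atom sounds tiny, but the sun contains on the order of 10^57 hydrogen atoms, which is why the bookkeeping works out so spectacularly in fusion's favor.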

Eddington quickly convinced himself and others that the amount of energy liberated by this transformation was enough to power the sun, at least in principle. The sticking point was how, exactly, four hydrogen atoms would come together to form a single helium atom. Atoms are electrically neutral, so they have no problem at all bumping into each other -- all the more so in the hot interior of the sun, which Eddington calculated to be 40 million degrees Kelvin. However, in order to fuse four hydrogen atoms into a helium atom, it is the atomic nuclei that have to come into contact, and those are positively charged and instantly repel each other. If the hydrogen atoms are moving fast enough, the repulsion can be overcome and the atoms will fuse, but there is a catch: in order to fuse more than a trivial amount of hydrogen, the temperature would have to be much higher than Eddington calculated -- more like tens of billions of degrees Kelvin. Such a hot sun is incompatible with the sun's current size; at those temperatures, the sun should be much more bloated than it is. Eddington was nevertheless convinced that hydrogen was somehow fusing at the much colder temperature of 40 million degrees; he wrote in 1927,

We do not argue with the critic who urges that the stars are not hot enough for this process; we tell him to go and find a hotter place.

However, physicists could see no way around the electric repulsion problem.
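The scale of that problem can be seen in a back-of-envelope comparison (order-of-magnitude only; the nuclear radius is a rough conventional value):

```python
# Typical thermal energy at 40 million K versus the electric-repulsion
# barrier two protons must overcome to touch (order of magnitude).
k_B = 1.381e-23     # Boltzmann constant, J/K
q   = 1.602e-19     # proton charge, C
k_e = 8.988e9       # Coulomb constant, N m^2 / C^2
r   = 1.0e-15       # rough nuclear radius at which fusion can occur, m

thermal = k_B * 4.0e7          # typical thermal energy at 40 million K
barrier = k_e * q**2 / r       # electrostatic energy at nuclear contact

# Temperature at which typical thermal energy would match the barrier:
T_needed = barrier / k_B
print(f"barrier is ~{barrier/thermal:.0f}x thermal; need T ~ {T_needed:.1e} K")
```

Classically, protons in a 40-million-degree gas fall short of the barrier by a factor of hundreds, and closing the gap requires the tens of billions of degrees mentioned above -- hence the impasse, until quantum tunneling entered the picture.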

As it turned out, however, there was a way. Just about the time that Eddington was fighting electric repulsion, the young Russian physicist George Gamow (1904-1968) showed that quantum mechanics explained how alpha particles could escape from atomic nuclei in radioactive decay, even though the nuclear force holding the nucleus together was technically too strong to allow this to happen. The Welsh astronomer Robert Atkinson (1898-1982) and German physicist Fritz Houtermans (1903-1966) determined that the same mechanism could permit hydrogen nuclei to come together, even though the electromagnetic force holding them apart was technically too strong to allow this to happen, and they wrote this up in a paper in 1929. At last, the primary energy source of the stars was understood.

Let us recap, then. Stars derive their energy from fusion. In the process of fusing into a single atom of helium, four hydrogen atoms lose some mass; this mass is transformed into energy. The hydrogen atoms do not all come together at once -- even quantum mechanics does not allow this in the sun. Instead, the sun fuses hydrogen in two main ways:

1. In one process, a carbon atom "swallows" the four hydrogen atoms, one by one, emitting bits of energy along the way and becoming in turn different forms of nitrogen and oxygen, until it "burps out" the helium at the end and returns to its original carbon self. This is known as the carbon cycle.

2. In the second process, two hydrogens come together to form deuterium, a heavy form of hydrogen weighing about twice as much; this then collides with another hydrogen to form helium-3, a lighter form of helium which weighs -- you guessed it -- three times as much as ordinary hydrogen; finally, two helium-3 atoms collide to yield an ordinary helium atom and two hydrogen atoms. Those two hydrogen atoms are then available to be fused. This is known as the proton-proton chain.

Which process dominates in any given star depends on that star's temperature. For our own star and other cooler stars, the proton-proton chain is dominant. Stars much hotter than our own produce most of their energy via the carbon cycle.
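The proton-proton chain's particle bookkeeping can be checked with a toy tally (simplified: the positrons and neutrinos emitted in the first step are omitted):

```python
# Toy tally of the proton-proton chain as described above.
h, d, he3, he4 = 6, 0, 0, 0    # start with six hydrogens (protons)

# Step 1 (happens twice): hydrogen + hydrogen -> deuterium
h -= 4; d += 2
# Step 2 (happens twice): deuterium + hydrogen -> helium-3
h -= 2; d -= 2; he3 += 2
# Step 3 (happens once): helium-3 + helium-3 -> helium-4 + 2 hydrogens
he3 -= 2; he4 += 1; h += 2

print(h, d, he3, he4)   # prints 2 0 0 1
```

Six hydrogens go in, two come back out, so the net effect is exactly as advertised: four hydrogens consumed per helium atom made.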

The advent of computers made it possible to simulate stars of different mass and watch their progress in reasonable times (by human standards). Several billion years of stellar evolution could be compressed at first into weeks, then days, and then hours. Out of these simulations came the conclusion that more massive stars burn their hydrogen fuel much more quickly than less massive ones. Even though more massive stars have more hydrogen to begin with, they also burn hotter, and go through their hydrogen supply that much faster. The sun has enough hydrogen for about 11 billion years of uninterrupted fusion. Much larger stars -- say, those of about 50 times the sun's mass -- run out of hydrogen in only a few million years. On the other hand, much smaller stars, perhaps as small as a tenth of the sun's mass, run out of their hydrogen supply only after trillions of years. All of the stars this small are still burning steadily, because there hasn't been enough time since the universe was born for them to burn out.
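The lifetimes quoted above follow from a simple scaling argument (an assumption on my part, not a result from the essay: luminosity grows roughly as mass to the 3.5 power, so lifetime, being fuel divided by burn rate, goes roughly as mass to the -2.5 power):

```python
# Rough main-sequence lifetimes from a power-law scaling, anchored to the
# essay's figure of ~11 billion years for the sun. Assumes lifetime
# scales as M^-2.5, a common textbook approximation.
T_SUN = 11e9   # years, from the essay

def lifetime(mass_in_suns):
    """Very rough main-sequence lifetime in years."""
    return T_SUN * mass_in_suns ** -2.5

for m in (0.1, 1.0, 50.0):
    print(f"{m:5.1f} solar masses -> {lifetime(m):.1e} years")
```

This reproduces the trillions of years for a tenth-of-a-sun star; for the most massive stars the simple power law overshoots a bit (real 50-solar-mass stars last a few million years, since luminosity stops growing so steeply at the top end), but the qualitative point stands: the heavyweights burn out almost immediately by cosmic standards.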

[Image: Nova Cygni 1992]

But what happens after the hydrogen runs out? What then? Do stars just fizzle out and gradually cool down, as many astronomers suspected? Or does something more fantastic happen? When the earliest stellar models were run through the computers, the simulations didn't yield anything like what we believed to be aging stars. Nor did they yield any kind of nonsensical result. They refused to yield anything at all; once the fusible hydrogen was depleted, the computers were unable to proceed further. There was hydrogen further out in the star, but only where it was so cool that fusion couldn't take place. It would take a completely new simulation technique to carry the life story of stars forward into death.

In my next essay, then, I'll talk about what happens when stars run out of hydrogen.


Brian Tung is a computer scientist by day and avid amateur astronomer by night. He is an active member of the Los Angeles Astronomical Society and runs his own astronomy Web site.