Hamming initially wanted to study engineering, but money was scarce during the Great Depression, and the only scholarship offer he received came from the University of Chicago, which had no engineering school. Instead, he became a science student, majoring in mathematics,[3] and received his Bachelor of Science degree in 1937.[1] He later considered this a fortunate turn of events. "As an engineer," he said, "I would have been the guy going down manholes instead of having the excitement of frontier research work."[1]

Shortly before the first field test (you realize that no small scale experiment can be done—either you have a critical mass or you do not), a man asked me to check some arithmetic he had done, and I agreed, thinking to fob it off on some subordinate. When I asked what it was, he said, "It is the probability that the test bomb will ignite the whole atmosphere." I decided I would check it myself! The next day when he came for the answers I remarked to him, "The arithmetic was apparently correct but I do not know about the formulas for the capture cross sections for oxygen and nitrogen—after all, there could be no experiments at the needed energy levels." He replied, like a physicist talking to a mathematician, that he wanted me to check the arithmetic not the physics, and left. I said to myself, "What have you done, Hamming, you are involved in risking all of life that is known in the Universe, and you do not know much of an essential part?" I was pacing up and down the corridor when a friend asked me what was bothering me. I told him. His reply was, "Never mind, Hamming, no one will ever blame you."[5]

Hamming remained at Los Alamos until 1946, when he accepted a post at the Bell Telephone Laboratories (BTL). For the trip to New Jersey, he bought Klaus Fuchs's old car. When he later sold it just weeks before Fuchs was unmasked as a spy, the FBI regarded the timing as suspicious enough to interrogate Hamming.[2] Although Hamming described his role at Los Alamos as being that of a "computer janitor",[6] he saw computer simulations of experiments that would have been impossible to perform in a laboratory. "And when I had time to think about it," he later recalled, "I realized that it meant that science was going to be changed".[1]

At Bell Labs, Hamming shared an office for a time with Claude Shannon. The Mathematical Research Department also included John Tukey and Los Alamos veterans Donald Ling and Brockway McMillan. Shannon, Ling, McMillan and Hamming came to call themselves the Young Turks.[3] "We were first-class troublemakers," Hamming later recalled. "We did unconventional things in unconventional ways and still got valuable results. Thus management had to tolerate us and let us alone a lot of the time."[1]

Although Hamming had been hired to work on elasticity theory, he still spent much of his time with the calculating machines.[6] Before he went home one Friday in 1947, he set the machines to perform a long and complex series of calculations over the weekend, only to find when he arrived on Monday morning that an error had occurred early in the process and the machines had abandoned the calculation.[7] Digital machines manipulated information as sequences of zeroes and ones, units of information that Tukey would christen "bits".[8] If a single bit in a sequence was wrong, then the whole sequence would be wrong. To detect this, a parity bit was used to verify the correctness of each sequence. "If the computer can tell when an error has occurred," Hamming reasoned, "surely there is a way of telling where the error is so that the computer can correct the error itself."[7]
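The parity scheme described above can be sketched in a few lines. This is an illustration, not Hamming's original machinery; the function names are invented for the example. An even-parity bit appended to a block of data bits lets a machine detect any single-bit error, but not locate it:

```python
def add_parity(bits):
    """Append an even-parity bit so the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def parity_ok(bits):
    """True if the block (data plus parity bit) has an even number of 1s."""
    return sum(bits) % 2 == 0

word = add_parity([1, 0, 1, 1])   # -> [1, 0, 1, 1, 1]
assert parity_ok(word)
word[2] ^= 1                      # flip one bit
assert not parity_ok(word)        # error detected, but its position is unknown
```

This is exactly the limitation Hamming observed: the check announces that something is wrong without saying where.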

Hamming set himself the task of solving this problem,[2] which he realised would have an enormous range of applications. Each bit can only be a zero or a one, so if you know which bit is wrong, it can be corrected. In a landmark paper published in 1950, he introduced the concept of the number of positions in which two code words differ, and therefore of how many changes are required to transform one code word into another, a quantity today known as the Hamming distance.[9] Hamming thereby created a family of mathematical error-correcting codes, called Hamming codes. This not only solved an important problem in telecommunications and computer science, but also opened up a whole new field of study.[9][10]
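The idea can be illustrated with the binary Hamming (7,4) code, a standard member of the family. This is a minimal sketch (the function names and bit layout conventions are chosen for the example): three parity bits, placed at positions 1, 2 and 4 of a 7-bit word, each cover an overlapping subset of positions, so the pattern of failed checks names the position of a single flipped bit:

```python
def hamming_distance(a, b):
    """Number of positions in which two equal-length code words differ."""
    return sum(x != y for x, y in zip(a, b))

def encode(d):
    """Encode 4 data bits [d1, d2, d3, d4] into a 7-bit Hamming code word."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4                    # checks positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4                    # checks positions 2, 3, 6, 7
    p4 = d2 ^ d3 ^ d4                    # checks positions 4, 5, 6, 7
    return [p1, p2, d1, p4, d2, d3, d4]  # positions 1..7

def correct(w):
    """Locate and fix at most one flipped bit; return the corrected word."""
    w = list(w)
    syndrome = 0
    for pos in range(1, 8):              # XOR together positions holding a 1
        if w[pos - 1]:
            syndrome ^= pos
    if syndrome:                         # nonzero syndrome names the bad position
        w[syndrome - 1] ^= 1
    return w

word = encode([1, 0, 1, 1])
garbled = list(word)
garbled[4] ^= 1                          # flip the bit at position 5
assert correct(garbled) == word          # the error is found and repaired
```

Because any two distinct code words differ in at least three positions (minimum Hamming distance 3), a single flipped bit still leaves the received word closer to its original code word than to any other.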

The Hamming bound, also known as the sphere-packing or volume bound, is a limit on the parameters of an arbitrary block code. It derives from an interpretation in terms of packing spheres, defined by the Hamming distance, into the space of all possible code words. It gives an important limit on the efficiency with which any error-correcting code can utilize the space in which its code words are embedded. A code that attains the Hamming bound is said to be a perfect code. Hamming codes are perfect codes.[11][12]
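A quick numeric check (an illustration, with an invented function name) shows what "perfect" means here: each of the 2^4 code words of the binary (7,4) code owns a radius-1 sphere containing itself and its 7 one-bit neighbours, and these 16 spheres of 8 words each exactly tile all 2^7 words with nothing left over:

```python
from math import comb

def hamming_bound_codewords(n, t):
    """Sphere-packing bound: max code words in {0,1}^n correcting t errors."""
    sphere_volume = sum(comb(n, i) for i in range(t + 1))
    return 2**n // sphere_volume

print(hamming_bound_codewords(7, 1))   # 128 // 8 = 16 = 2^4: the bound is met
```

For a non-perfect code the spheres would leave some words uncovered, wasting part of the space.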

Returning to differential equations, Hamming studied means of numerically integrating them. A popular approach at the time was Milne's Method, attributed to William Edward Milne.[13] This had the drawback of being unstable: under certain conditions the result could be swamped by roundoff noise. Hamming developed an improved version, the Hamming predictor-corrector. This was in use for many years, but has since been superseded by the Adams method.[14] He did extensive research into digital filters, devising a new filter, the Hamming window, and eventually writing an entire book on the subject, Digital Filters (1977).[15]
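The Hamming window has a simple closed form, w(n) = 0.54 - 0.46 cos(2πn/(N-1)): a raised cosine that tapers a signal segment toward its edges to reduce spectral leakage before a Fourier transform. A minimal sketch (the function name is invented for the example):

```python
import math

def hamming_window(N):
    """Hamming window of length N: w(n) = 0.54 - 0.46*cos(2*pi*n/(N-1))."""
    return [0.54 - 0.46 * math.cos(2 * math.pi * n / (N - 1))
            for n in range(N)]

w = hamming_window(5)
# The endpoints sit at 0.54 - 0.46 = 0.08, and the midpoint peaks at 1.0.
```

The characteristic 0.54/0.46 coefficients, rather than the 0.5/0.5 of the related Hann window, are chosen to cancel the first sidelobe of the window's spectrum.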

During the 1950s, he programmed one of the earliest computers, the IBM 650, and with Ruth A. Weiss developed the L2 programming language, one of the earliest computer languages, in 1956. It was widely used within Bell Labs, and also by external users, who knew it as Bell 2. It was superseded by Fortran when Bell Labs' IBM 650 was replaced by the IBM 704 in 1957.[16]

In A Discipline of Programming (1976), Edsger Dijkstra attributed to Hamming the problem of efficiently finding regular numbers.[17] The problem became known as "Hamming's problem", and the regular numbers are often referred to as Hamming numbers in computer science, although he did not discover them.[18]
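Dijkstra's approach to Hamming's problem generates the regular numbers, those of the form 2^i · 3^j · 5^k, in increasing order by observing that every such number beyond 1 is 2, 3 or 5 times an earlier one. A sketch (the function name is invented for the example):

```python
def hamming_numbers(count):
    """Return the first `count` regular (Hamming) numbers in increasing order."""
    h = [1]
    i2 = i3 = i5 = 0                 # indices of the next candidates to multiply
    while len(h) < count:
        nxt = min(2 * h[i2], 3 * h[i3], 5 * h[i5])
        h.append(nxt)
        # Advance every index that produced nxt, so duplicates are skipped.
        if nxt == 2 * h[i2]: i2 += 1
        if nxt == 3 * h[i3]: i3 += 1
        if nxt == 5 * h[i5]: i5 += 1
    return h

print(hamming_numbers(10))   # [1, 2, 3, 4, 5, 6, 8, 9, 10, 12]
```

The three-pointer merge does constant work per output number, avoiding the naive approach of testing every integer for smooth factorization.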

Throughout his time at Bell Labs, Hamming avoided management responsibilities. He was promoted to management positions several times, but always managed to make these only temporary. "I knew in a sense that by avoiding management," he later recalled, "I was not doing my duty by the organization. That is one of my biggest failures."[1]

Hamming served as president of the Association for Computing Machinery from 1958 to 1960.[6] In 1960, he predicted that one day half of Bell Labs' budget would be spent on computing. None of his colleagues thought that it would ever be so high, but his forecast actually proved to be too low.[19] His philosophy on scientific computing appeared as the motto of his Numerical Methods for Scientists and Engineers (1962):

"The purpose of computing is insight, not numbers."[20]

In later life, Hamming became interested in teaching. Between 1960 and 1976, when he left Bell Labs, he held visiting or adjunct professorships at Stanford University, Stevens Institute of Technology, the City College of New York, the University of California at Irvine and Princeton University.[21] As a Young Turk, Hamming had resented older scientists who used up space and resources that could have been put to much better use by the Young Turks. Looking at a commemorative poster of Bell Labs' valued achievements, he noted that he had worked on or been associated with nearly all of those listed in the first half of his career there, but none in the second. He therefore resolved to retire in 1976, after thirty years.[1]

The way mathematics is currently taught it is exceedingly dull. In the calculus book we are currently using on my campus, I found no single problem whose answer I felt the student would care about! The problems in the text have the dignity of solving a crossword puzzle – hard to be sure, but the result is of no significance in life.[3]

Hamming attempted to rectify the situation with a new text, Methods of Mathematics Applied to Calculus, Probability, and Statistics (1985).[3] In 1993, he remarked that "when I left BTL, I knew that that was the end of my scientific career. When I retire from here, in another sense, it's really the end."[1] And so it proved. He became Professor Emeritus in June 1997,[22] and delivered his last lecture in December 1997, just a few weeks before his death from a heart attack on January 7, 1998.[6] He was survived by his wife Wanda.[22]

Methods of Mathematics Applied to Calculus, Probability, and Statistics is an unconventional introductory textbook that attempts both to teach calculus and to give some idea of what it is good for at the same time. It may be of special interest to someone teaching an introductory calculus course from a conventional textbook who wants to pick up new pedagogical viewpoints.