The American Economy during World War II

Christopher J. Tassava

For the United States, World War II and the Great Depression constituted the most important economic events of the twentieth century. The war’s effects were varied and far-reaching. The war decisively ended the depression itself. The federal government emerged from the war as a potent economic actor, able to regulate economic activity and to partially control the economy through spending and consumption. American industry was revitalized by the war, and many sectors were by 1945 either sharply oriented to defense production (for example, aerospace and electronics) or completely dependent on it (atomic energy). The organized labor movement, strengthened by the war beyond even its depression-era height, became a major counterbalance to both the government and private industry. The war’s rapid scientific and technological changes continued and intensified trends begun during the Great Depression and created a permanent expectation of continued innovation on the part of many scientists, engineers, government officials and citizens. Similarly, the substantial increases in personal income and frequently, if not always, in quality of life during the war led many Americans to foresee permanent improvements to their material circumstances, even as others feared a postwar return of the depression. Finally, the war’s global scale severely damaged every major economy in the world except for the United States, which thus enjoyed unprecedented economic and political power after 1945.

The Great Depression

The global conflict which was labeled World War II emerged from the Great Depression, an upheaval which destabilized governments, economies, and entire nations around the world. In Germany, for instance, the rise of Adolf Hitler and the Nazi party occurred at least partly because Hitler claimed to be able to transform a weakened Germany into a self-sufficient military and economic power which could control its own destiny in European and world affairs, even as liberal powers like the United States and Great Britain were buffeted by the depression.

In the United States, President Franklin Roosevelt promised, less dramatically, to enact a “New Deal” which would essentially reconstruct American capitalism and governance on a new basis. As it waxed and waned between 1933 and 1940, Roosevelt’s New Deal mitigated some effects of the Great Depression, but did not end the economic crisis. In 1939, when World War II erupted in Europe with Germany’s invasion of Poland, numerous economic indicators suggested that the United States was still deeply mired in the depression. For instance, after 1929 the American gross domestic product declined for four straight years, then slowly and haltingly climbed back to its 1929 level, which was finally exceeded again in 1936. (Watkins, 2002; Johnston and Williamson, 2004)

Unemployment was another measure of the depression’s impact. Between 1929 and 1939, the American unemployment rate averaged 13.3 percent (calculated from “Corrected BLS” figures in Darby, 1976, 8). In the summer of 1940, about 5.3 million Americans were still unemployed — far fewer than the 11.5 million who had been unemployed in 1932 (about thirty percent of the American workforce) but still a significant pool of unused labor and, often, suffering citizens. (Darby, 1976, 7. For somewhat different figures, see Table 3 below.)

In spite of these dismal statistics, the United States was, in other ways, reasonably well prepared for war. The wide array of New Deal programs and agencies which existed in 1939 meant that the federal government was markedly larger and more actively engaged in social and economic activities than it had been in 1929. Moreover, the New Deal had accustomed Americans to a national government which played a prominent role in national affairs and which, at least under Roosevelt’s leadership, often chose to lead, not follow, private enterprise and to use new capacities to plan and administer large-scale endeavors.

Preparedness and Conversion

As war spread throughout Europe and Asia between 1939 and 1941, nowhere was the federal government’s leadership more important than in the realm of “preparedness” — the national project to ready the country for war by enlarging the military, strengthening certain allies such as Great Britain, and above all converting America’s industrial base to produce armaments and other war materiel rather than civilian goods. “Conversion” was the key issue in American economic life in 1940-1942. In many industries, company executives resisted converting to military production because they did not want to lose consumer market share to competitors who did not convert. Conversion thus became a goal pursued by public officials and labor leaders. In 1940, Walter Reuther, a high-ranking officer in the United Auto Workers labor union, provided impetus for conversion by advocating that the major automakers convert to aircraft production. Though initially rejected by car-company executives and many federal officials, the Reuther Plan effectively called the public’s attention to America’s lagging preparedness for war. Still, the auto companies only fully converted to war production in 1942 and only began substantially contributing to aircraft production in 1943.

Not all industries lagged as badly as autos, though, even in the eyes of contemporary observers. Merchant shipbuilding mobilized early and effectively. The industry was overseen by the U.S. Maritime Commission (USMC), a New Deal agency established in 1936 to revive the moribund shipbuilding industry, which had been in a depression since 1921, and to ensure that American shipyards would be capable of meeting wartime demands. With the USMC supporting and funding the establishment and expansion of shipyards around the country, including especially the Gulf and Pacific coasts, merchant shipbuilding took off. The entire industry had produced only 71 ships between 1930 and 1936, but from 1938 to 1940, commission-sponsored shipyards turned out 106 ships, and then almost that many in 1941 alone (Fischer, 41). The industry’s position in the vanguard of American preparedness grew from its strategic import — ever more ships were needed to transport American goods to Great Britain and France, among other American allies — and from the Maritime Commission’s ability to administer the industry through means as varied as construction contracts, shipyard inspectors, and raw goading of contractors by commission officials.

Many of the ships built in Maritime Commission shipyards carried American goods to the European allies as part of the “Lend-Lease” program, which was instituted in 1941 and provided another early indication that the United States could and would shoulder a heavy economic burden. By all accounts, Lend-Lease was crucial to enabling Great Britain and the Soviet Union to fight the Axis, not least before the United States formally entered the war in December 1941. (Though scholars are still assessing the impact of Lend-Lease on these two major allies, it is likely that both countries could have continued to wage war against Germany without American aid, which seems to have served largely to augment the British and Soviet armed forces and to have shortened the time necessary to retake the military offensive against Germany.) Between 1941 and 1945, the U.S. exported about $32.5 billion worth of goods through Lend-Lease, of which $13.8 billion went to Great Britain and $9.5 billion went to the Soviet Union (Milward, 71). The war dictated that aircraft, ships (and ship-repair services), military vehicles, and munitions would always rank among the quantitatively most important Lend-Lease goods, but food was also a major export to Britain (Milward, 72).

Pearl Harbor was an enormous spur to conversion. The formal declarations of war by the United States on Japan and Germany made plain, once and for all, that the American economy would now need to be transformed into what President Roosevelt had called “the Arsenal of Democracy” a full year before, in December 1940. From the perspective of federal officials in Washington, the first step toward wartime mobilization was the establishment of an effective administrative bureaucracy.

War Administration

From the beginning of preparedness in 1939 through the peak of war production in 1944, American leaders recognized that the stakes were too high to permit the war economy to grow in an unfettered, laissez-faire manner. American manufacturers, for instance, could not be trusted to stop producing consumer goods and to start producing materiel for the war effort. To organize the growing economy and to ensure that it produced the goods needed for war, the federal government spawned an array of mobilization agencies which not only often purchased goods (or arranged their purchase by the Army and Navy), but which in practice closely directed those goods’ manufacture and heavily influenced the operation of private companies and whole industries.

Though both the New Deal and mobilization for World War I served as models, the World War II mobilization bureaucracy assumed its own distinctive shape as the war economy expanded. Most importantly, American mobilization was markedly less centralized than mobilization in other belligerent nations. The war economies of Britain and Germany, for instance, were overseen by war councils which comprised military and civilian officials. In the United States, the Army and Navy were not incorporated into the civilian administrative apparatus, nor was a supreme body created to subsume military and civilian organizations and to direct the vast war economy.

Instead, the military services enjoyed almost-unchecked control over their enormous appetites for equipment and personnel. With respect to the economy, the services were largely able to curtail production destined for civilians (e.g., automobiles or many non-essential foods) and even for war-related but non-military purposes (e.g., textiles and clothing). In parallel to but never commensurate with the Army and Navy, a succession of top-level civilian mobilization agencies sought to influence Army and Navy procurement of manufactured goods like tanks, planes, and ships, raw materials like steel and aluminum, and even personnel. One way of gauging the scale of the increase in federal spending and the concomitant increase in military spending is through comparison with GDP, which itself rose sharply during the war. Table 1 shows the dramatic increases in GDP, federal spending, and military spending.

Preparedness Agencies

To oversee this growth, President Roosevelt created a number of preparedness agencies beginning in 1939, including the Office for Emergency Management and its key sub-organization, the National Defense Advisory Commission; the Office of Production Management; and the Supply Priorities Allocation Board. None of these organizations was particularly successful at generating or controlling mobilization because all included two competing parties. On one hand, private-sector executives and managers had joined the federal mobilization bureaucracy but continued to emphasize corporate priorities such as profits and positioning in the marketplace. On the other hand, reform-minded civil servants, who were often holdovers from the New Deal, emphasized the state’s prerogatives with respect to mobilization and war making. As a result of this basic division in the mobilization bureaucracy, “the military largely remained free of mobilization agency control” (Koistinen, 502).

War Production Board

In January 1942, as part of another effort to mesh civilian and military needs, President Roosevelt established a new mobilization agency, the War Production Board, and placed it under the direction of Donald Nelson, a former Sears Roebuck executive. Nelson understood immediately that the staggeringly complex problem of administering the war economy could be reduced to one key issue: balancing the needs of civilians — especially the workers whose efforts sustained the economy — against the needs of the military — especially those of servicemen and women but also their military and civilian leaders.

Though neither Nelson nor other high-ranking civilians ever fully resolved this issue, Nelson did realize several key economic goals. First, in late 1942, Nelson successfully resolved the so-called “feasibility dispute,” a conflict between civilian administrators and their military counterparts over the extent to which the American economy should be devoted to military needs during 1943 (and, by implication, in subsequent war years). Arguing that “all-out” production for war would harm America’s long-term ability to continue to produce for war after 1943, Nelson convinced the military to scale back its Olympian demands. He thereby also established a precedent for planning war production so as to meet most military and some civilian needs. Second (and partially as a result of the feasibility dispute), the WPB in late 1942 created the “Controlled Materials Plan,” which effectively allocated steel, aluminum, and copper to industrial users. The CMP remained in effect throughout the war, and helped curtail conflict among the military services and between them and civilian agencies over the growing but still scarce supplies of those three key metals.

Office of War Mobilization

By late 1942 it was clear that Nelson and the WPB were unable to fully control the growing war economy and especially to wrangle with the Army and Navy over the necessity of continued civilian production. Accordingly, in May 1943 President Roosevelt created the Office of War Mobilization and in July put James Byrnes — a trusted advisor, a former U.S. Supreme Court justice, and the so-called “assistant president” — in charge. Though the WPB was not abolished, the OWM soon became the dominant mobilization body in Washington. Unlike Nelson, Byrnes was able to establish an accommodation with the military services over war production by “acting as an arbiter among contending forces in the WPB, settling disputes between the board and the armed services, and dealing with the multiple problems” of the War Manpower Commission, the agency charged with controlling civilian labor markets and with assuring a continuous supply of draftees to the military (Koistinen, 510).

Beneath the highest-level agencies like the WPB and the OWM, a vast array of other federal organizations administered everything from labor (the War Manpower Commission) to merchant shipbuilding (the Maritime Commission) and from prices (the Office of Price Administration) to food (the War Food Administration). Given the scale and scope of these agencies’ efforts, they did sometimes fail, and especially so when they carried with them the baggage of the New Deal. By the midpoint of America’s involvement in the war, for example, the Civilian Conservation Corps, the Works Progress Administration, and the Rural Electrification Administration — all prominent New Deal organizations which tried and failed to find a purpose in the mobilization bureaucracy — had been actually or virtually abolished.

Taxation

However, these agencies were often quite successful in achieving their respective, narrower aims. The Department of the Treasury, for instance, was remarkably successful at generating money to pay for the war, notably through the first general income tax in American history and the famous “war bonds” sold to the public. Beginning in 1940, the government extended the income tax to virtually all Americans and began collecting the tax via the now-familiar method of continuous withholdings from paychecks (rather than lump-sum payments after the fact). The number of Americans required to pay federal taxes rose from 4 million in 1939 to 43 million in 1945. With such a large pool of taxpayers, the American government took in $45 billion in 1945, an enormous increase over the $8.7 billion collected in 1941 but still far short of the $83 billion spent on the war in 1945. Over that same period, federal tax revenue grew from about 8 percent of GDP to more than 20 percent. Americans who earned as little as $500 per year paid income tax at a 23 percent rate, while those who earned more than $1 million per year paid a 94 percent rate. The average income tax rate peaked in 1944 at 20.9 percent (“Fact Sheet: Taxes”).

War Bonds

All told, taxes provided about $136.8 billion of the war’s total cost of $304 billion (Kennedy, 625). To cover the other $167.2 billion, the Treasury Department also expanded its bond program, creating the famous “war bonds” hawked by celebrities and purchased in vast numbers and enormous values by Americans. The first war bond was purchased by President Roosevelt on May 1, 1941 (“Introduction to Savings Bonds”). Though the bonds returned only 2.9 percent annual interest after a 10-year maturity, they nonetheless served as a valuable source of revenue for the federal government and an extremely important investment for many Americans. Bonds served as a way for citizens to make an economic contribution to the war effort, but because the interest they paid accrued more slowly than consumer prices rose, they could not fully protect the value of income that could not readily be spent during the war. By the time war-bond sales ended in 1946, 85 million Americans had purchased more than $185 billion worth of the securities, often through automatic deductions from their paychecks (“Brief History of World War Two Advertising Campaigns: War Loans and Bonds”). Commercial institutions like banks also bought billions of dollars of bonds and other treasury paper, holding more than $24 billion at the war’s end (Kennedy, 626).
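The 2.9 percent figure can be recovered from the bonds’ terms. Assuming the standard Series E terms of a $18.75 purchase price redeemed for a $25 face value at the 10-year maturity (figures not stated above), the implied compound annual yield is a straightforward calculation:

```python
# Sketch: implied annual yield of a war bond, assuming the
# standard Series E terms ($18.75 purchase price, $25.00 face
# value at the 10-year maturity) -- these terms are assumed,
# not given in the text above.
purchase_price = 18.75
face_value = 25.00
years = 10

# Compound annual growth rate: (end / start)^(1/years) - 1
annual_yield = (face_value / purchase_price) ** (1 / years) - 1
print(f"{annual_yield:.1%}")  # prints "2.9%"
```

This matches the 2.9 percent annual interest cited above, which supports reading that figure as a compound yield to maturity rather than a simple coupon.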

Price Controls and the Standard of Living

Fiscal and financial matters were also addressed by other federal agencies. For instance, the Office of Price Administration used its “General Maximum Price Regulation” (also known as “General Max”) to attempt to curtail inflation by maintaining prices at their March 1942 levels. In July, the National War Labor Board (NWLB; a successor to a New Deal-era body) limited wartime wage increases to about 15 percent, the factor by which the cost of living rose from January 1941 to May 1942. Neither “General Max” nor the wage-increase limit was entirely successful, though federal efforts did curtail inflation. Between April 1942 and June 1946, the period of the most stringent federal controls on inflation, the annual rate of inflation was just 3.5 percent; the annual rate had been 10.3 percent in the six months before April 1942 and it soared to 28.0 percent in the six months after June 1946 (Rockoff, “Price and Wage Controls in Four Wartime Periods,” 382). With wages rising about 65 percent over the course of the war, this limited success in cutting the rate of inflation meant that many American civilians enjoyed a stable or even improving quality of life during the war (Kennedy, 641). Improvement in the standard of living was not ubiquitous, however. In some regions, such as rural areas in the Deep South, living standards stagnated or even declined, and according to some economists, the national living standard barely stayed level or even declined (Higgs, 1992).

Labor Unions

Labor unions and their members benefited especially. The NWLB’s “maintenance-of-membership” rule allowed unions to count all new employees as union members and to draw union dues from those new employees’ paychecks, so long as the unions themselves had already been recognized by the employer. Given that most new employment occurred in unionized workplaces, including plants funded by the federal government through defense spending, “the maintenance-of-membership ruling was a fabulous boon for organized labor,” for it required employers to accept unions and allowed unions to grow dramatically: organized labor expanded from 10.5 million members in 1941 to 14.75 million in 1945 (Blum, 140). By 1945, approximately 35.5 percent of the non-agricultural workforce was unionized, a record high.

The War Economy at High Water

Despite the almost-continual crises of the civilian war agencies, the American economy expanded at an unprecedented (and unduplicated) rate between 1941 and 1945. The gross national product of the U.S., as measured in constant dollars, grew from $88.6 billion in 1939 — while the country was still suffering from the depression — to $135 billion in 1944. War-related production skyrocketed from just two percent of GNP to 40 percent in 1943 (Milward, 63).
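The pace of this expansion can be expressed as an implied average annual growth rate, using only the constant-dollar GNP figures just cited:

```python
# Implied average annual growth of real U.S. GNP, 1939-1944,
# computed from the constant-dollar figures given above
# ($ billions).
gnp_1939 = 88.6
gnp_1944 = 135.0
years = 1944 - 1939

# Compound annual growth rate over the five-year span
cagr = (gnp_1944 / gnp_1939) ** (1 / years) - 1
print(f"{cagr:.1%} per year")  # prints "8.8% per year"
```

A sustained real growth rate near 9 percent per year underlines why this wartime expansion is described as unprecedented and unduplicated.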

As Table 2 shows, output in many American manufacturing sectors increased spectacularly from 1939 to 1944, the height of war production in many industries.

Table 2: Indices of American Manufacturing Output (1939 = 100)

                1940   1941   1942   1943   1944
Aircraft         245    630   1706   2842   2805
Munitions        140    423   2167   3803   2033
Shipbuilding     159    375   1091   1815   1710
Aluminum         126    189    318    561    474
Rubber           109    144    152    202    206
Steel            131    171    190    202    197

Source: Milward, 69.

Expansion of Employment

The wartime economic boom spurred and benefited from several important social trends. Foremost among these trends was the expansion of employment, which paralleled the expansion of industrial production. In 1944, unemployment dipped to 1.2 percent of the civilian labor force, a record low in American economic history and as near to “full employment” as is likely possible (Samuelson). Table 3 shows the overall employment and unemployment figures during the war period.

Not only those who were unemployed during the depression found jobs. So, too, did about 10.5 million Americans who either could not then have had jobs (the 3.25 million youths who came of age after Pearl Harbor) or who would not have then sought employment (3.5 million women, for instance). By 1945, the percentage of blacks who held war jobs — eight percent — approximated blacks’ percentage in the American population — about ten percent (Kennedy, 775). Almost 19 million American women (including millions of black women) were working outside the home by 1945. Though most continued to hold traditional female occupations such as clerical and service jobs, two million women did labor in war industries (half in aerospace alone) (Kennedy, 778). Employment did not just increase on the industrial front. Civilian employment by the executive branch of the federal government — which included the war administration agencies — rose from about 830,000 in 1938 (already a historical peak) to 2.9 million in June 1945 (Nash, 220).

Population Shifts

Migration was another major socioeconomic trend. The 15 million Americans who joined the military — who, that is, became employees of the military — all moved to and between military bases; 11.25 million ended up overseas. Continuing the movements of the depression era, about 15 million civilian Americans made a major move (defined as changing their county of residence). African-Americans moved with particular alacrity and permanence: 700,000 left the South and 120,000 arrived in Los Angeles during 1943 alone. Migration was especially strong along rural-urban axes, especially to war-production centers around the country, and along an east-west axis (Kennedy, 747-748, 768). For instance, as Table 4 shows, the population of the three Pacific Coast states grew by a third between 1940 and 1945, permanently altering their demographics and economies.

Table 4: Population Growth in Washington, Oregon, and California, 1940-1945
(populations in millions)

              1940   1941   1942   1943   1944   1945   % growth, 1940-1945
Washington     1.7    1.8    1.9    2.1    2.1    2.3    35.3%
Oregon         1.1    1.1    1.1    1.2    1.3    1.3    18.2%
California     7.0    7.4    8.0    8.5    9.0    9.5    35.7%
Total          9.8   10.3   11.0   11.8   12.4   13.1    33.7%

Source: Nash, 222.
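The final column of Table 4 follows directly from its 1940 and 1945 endpoints; a minimal sketch reproducing those growth percentages:

```python
# Reproduce Table 4's "% growth" column from the 1940 and 1945
# populations (in millions), exactly as given in the table.
populations = {
    "Washington": (1.7, 2.3),
    "Oregon":     (1.1, 1.3),
    "California": (7.0, 9.5),
    "Total":      (9.8, 13.1),
}

for state, (pop_1940, pop_1945) in populations.items():
    growth = (pop_1945 / pop_1940 - 1) * 100
    print(f"{state}: {growth:.1f}%")
# prints:
# Washington: 35.3%
# Oregon: 18.2%
# California: 35.7%
# Total: 33.7%
```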

A third wartime socioeconomic trend was somewhat ironic, given the reduction in the supply of civilian goods: rapid increases in many Americans’ personal incomes. Driven by the federal government’s abilities to prevent price inflation and to subsidize high wages through war contracting and by the increase in the size and power of organized labor, incomes rose for virtually all Americans — whites and blacks, men and women, skilled and unskilled. Workers at the lower end of the spectrum gained the most: manufacturing workers enjoyed about a quarter more real income in 1945 than in 1940 (Kennedy, 641). These rising incomes were part of a wartime “great compression” of wages which equalized the distribution of incomes across the American population (Goldin and Margo, 1992). Again focusing on three war-boom states in the West, Table 5 shows that personal-income growth continued after the war, as well.

Table 5: Personal Income per Capita in Washington, Oregon, and California, 1940 and 1948

Despite the focus on military-related production in general and the impact of rationing in particular, spending in many civilian sectors of the economy rose even as the war consumed billions of dollars of output. Hollywood boomed as workers bought movie tickets rather than scarce clothes or unavailable cars. Americans placed more legal wagers in 1943 and 1944, and racetracks made more money than at any time before. In 1942, Americans spent $95 million on legal pharmaceuticals, $20 million more than in 1941. Department-store sales in November 1944 were greater than in any previous month in any year (Blum, 95-98). Black markets for rationed or luxury goods — from meat and chocolate to tires and gasoline — also boomed during the war.

Scientific and Technological Innovation

As observers during the war and ever since have recognized, scientific and technological innovations were a key aspect in the American war effort and an important economic factor in the Allies’ victory. While all of the major belligerents were able to tap their scientific and technological resources to develop weapons and other tools of war, the American experience was impressive in that scientific and technological change positively affected virtually every facet of the war economy.

The Manhattan Project

American techno-scientific innovations mattered most dramatically in “high-tech” sectors which were often hidden from public view by wartime secrecy. For instance, the Manhattan Project to create an atomic weapon was a direct and massive result of a stunning scientific breakthrough: the creation of a controlled nuclear chain reaction by a team of scientists at the University of Chicago in December 1942. Under the direction of the U.S. Army and several private contractors, scientists, engineers, and workers built a nationwide complex of laboratories and plants to manufacture atomic fuel and to fabricate atomic weapons. This network included laboratories at the University of Chicago and the University of California-Berkeley, uranium-processing complexes at Oak Ridge, Tennessee, and Hanford, Washington, and the weapon-design lab at Los Alamos, New Mexico. The Manhattan Project climaxed in August 1945, when the United States dropped two atomic weapons on Hiroshima and Nagasaki, Japan; these attacks likely accelerated Japanese leaders’ decision to seek peace with the United States. By that time, the Manhattan Project had become a colossal economic endeavor, costing approximately $2 billion and employing more than 100,000.

Though important and gigantic, the Manhattan Project was an anomaly in the broader war economy. Technological and scientific innovation also transformed less-sophisticated but still complex sectors such as aerospace or shipbuilding. The United States, as David Kennedy writes, “ultimately proved capable of some epochal scientific and technical breakthroughs, [but] innovated most characteristically and most tellingly in plant layout, production organization, economies of scale, and process engineering” (Kennedy, 648).

Aerospace

Aerospace provides one crucial example. American heavy bombers, like the B-29 Superfortress, were highly sophisticated weapons which could not have existed, much less contributed to the air war on Germany and Japan, without innovations such as bombsights, radar, and high-performance engines or advances in aeronautical engineering, metallurgy, and even factory organization. Encompassing hundreds of thousands of workers, four major factories, and $3 billion in government spending, the B-29 project required almost unprecedented organizational capabilities by the U.S. Army Air Forces, several major private contractors, and labor unions (Vander Meulen, 7). Overall, American aircraft production was the single largest sector of the war economy, costing $45 billion (almost a quarter of the $183 billion spent on war production), employing a staggering two million workers, and, most importantly, producing over 125,000 aircraft, which Table 6 describes in more detail.

Table 6: Production of Selected U.S. Military Aircraft (1941-1945)

Bombers      49,123
Fighters     63,933
Cargo        14,710
Total       127,766

Source: Air Force History Support Office
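The figures above are internally consistent; a quick check, using only numbers already given in the text and the table:

```python
# Check that Table 6's categories sum to its stated total, and
# that aircraft production's $45 billion was indeed "almost a
# quarter" of the $183 billion spent on war production.
bombers, fighters, cargo = 49_123, 63_933, 14_710
total = bombers + fighters + cargo
print(total)  # prints 127766, matching the table

aircraft_share = 45 / 183
print(f"{aircraft_share:.1%}")  # prints "24.6%"
```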

Shipbuilding

Shipbuilding offers a third example of innovation’s importance to the war economy. Allied strategy in World War II utterly depended on the movement of war materiel produced in the United States to the fighting fronts in Africa, Europe, and Asia. Between 1939 and 1945, the hundred merchant shipyards overseen by the U.S. Maritime Commission (USMC) produced 5,777 ships at a cost of about $13 billion (navy shipbuilding cost about $18 billion) (Lane, 8). Four key innovations facilitated this enormous wartime output. First, the commission itself allowed the federal government to direct the merchant shipbuilding industry. Second, the commission funded entrepreneurs, the industrialist Henry J. Kaiser chief among them, who had never before built ships and who were eager to use mass-production methods in the shipyards. These methods, including the substitution of welding for riveting and the addition of hundreds of thousands of women and minorities to the formerly all-white and all-male shipyard workforces, were a third crucial innovation. Last, the commission facilitated mass production by choosing to build many standardized vessels like the ugly, slow, and ubiquitous “Liberty” ship. By adapting well-known manufacturing techniques and emphasizing easily-made ships, merchant shipbuilding became a low-tech counterexample to the atomic-bomb project and the aerospace industry, yet also a sector which was spectacularly successful.

Reconversion and the War’s Long-term Effects

Reconversion from military to civilian production had been an issue as early as 1944, when WPB Chairman Nelson began pushing to scale back war production in favor of renewed civilian production. The military’s opposition to Nelson had contributed to the accession of James Byrnes and the OWM to the paramount spot in the war-production bureaucracy. Meaningful planning for reconversion was postponed until 1944 and the actual process of reconversion only began in earnest in early 1945, accelerating through V-E Day in May and V-J Day in September.

The most obvious effect of reconversion was the shift away from military production and back to civilian production. As Table 7 shows, this shift — as measured by declines in overall federal spending and in military spending — was dramatic, but did not cause the postwar depression which many Americans dreaded. Rather, American GDP continued to grow after the war (albeit not as rapidly as it had during the war; compare Table 1). The high level of defense spending, in turn, contributed to the creation of the “military-industrial complex,” the network of private companies, non-governmental organizations, universities, and federal agencies which collectively shaped American national defense policy and activity during the Cold War.

Reconversion spurred the second major restructuring of the American workplace in five years, as returning servicemen flooded back into the workforce and many war workers left, either voluntarily or involuntarily. For instance, many women left the labor force beginning in 1944 — sometimes voluntarily and sometimes involuntarily. In 1947, about a quarter of all American women worked outside the home, roughly the same number who had held such jobs in 1940 and far off the wartime peak of 36 percent in 1944 (Kennedy, 779).

G.I. Bill

Servicemen obtained numerous other economic benefits beyond their jobs, including educational assistance from the federal government and guaranteed mortgages and small-business loans via the Servicemen’s Readjustment Act of 1944 or “G.I. Bill.” Former servicemen thus became a vast and advantaged class of citizens which demanded, among other goods, inexpensive, often suburban housing; vocational training and college educations; and private cars which had been unobtainable during the war (Kennedy, 786-787).

The U.S.’s Position at the End of the War

At a macroeconomic scale, the war not only decisively ended the Great Depression, but created the conditions for productive postwar collaboration between the federal government, private enterprise, and organized labor, the parties whose tripartite collaboration helped engender continued economic growth after the war. The U.S. emerged from the war not only physically unscathed, but also economically strengthened by wartime industrial expansion, which placed the United States at absolute and relative advantage over both its allies and its enemies.

Possessed of an economy which was larger and richer than any other in the world, American leaders determined to make the United States the center of the postwar world economy. American aid to Europe ($13 billion via the European Recovery Program (ERP) or “Marshall Plan,” 1947-1951) and Japan ($1.8 billion, 1946-1952) furthered this goal by tying the economic reconstruction of West Germany, France, Great Britain, and Japan to American import and export needs, among other factors. Even before the war ended, the Bretton Woods Conference in 1944 determined key aspects of international economic affairs by establishing standards for currency convertibility and creating institutions such as the International Monetary Fund and the precursor of the World Bank.

In brief, as economic historian Alan Milward writes, “the United States emerged in 1945 in an incomparably stronger position economically than in 1941…. By 1945 the foundations of the United States’ economic domination over the next quarter of a century had been secured…. [This] may have been the most influential consequence of the Second World War for the post-war world” (Milward, 63).

Selected References

Adams, Michael C.C. The Best War Ever: America and World War II. Baltimore: Johns Hopkins University Press, 1994.

Anderson, Karen. Wartime Women: Sex Roles, Family Relations, and the Status of Women during World War II. Westport, CT: Greenwood Press, 1981.

Brody, David. “The New Deal and World War II.” In The New Deal, vol. 1, The National Level, edited by John Braeman, Robert Bremmer, and David Brody, 267-309. Columbus: Ohio State University Press, 1975.

Connery, Robert. The Navy and Industrial Mobilization in World War II. Princeton: Princeton University Press, 1951.

Darby, Michael R. “Three-and-a-Half Million U.S. Employees Have Been Mislaid: Or, an Explanation of Unemployment, 1934-1941.” Journal of Political Economy 84, no. 1 (February 1976): 1-16.

Field, Alexander J. “U.S. Productivity Growth in the Interwar Period and the 1990s.” (Paper presented at “Understanding the 1990s: the Long Run Perspective” conference, Duke University and the University of North Carolina, March 26-27, 2004) Available at www.unc.edu/depts/econ/seminars/Field.pdf.

Fischer, Gerald J. A Statistical Summary of Shipbuilding under the U.S. Maritime Commission during World War II. Washington, DC: Historical Reports of War Administration; United States Maritime Commission, no. 2, 1949.

Friedberg, Aaron. In the Shadow of the Garrison State. Princeton: Princeton University Press, 2000.

Goldin, Claudia. “The Role of World War II in the Rise of Women’s Employment.” American Economic Review 81, no. 4 (September 1991): 741-56.

Goldin, Claudia and Robert A. Margo. “The Great Compression: Wage Structure in the United States at Mid-Century.” Quarterly Journal of Economics 107, no. 2 (February 1992): 1-34.

Harrison, Mark, editor. The Economics of World War II: Six Great Powers in International Comparison. Cambridge: Cambridge University Press, 1998.

Higgs, Robert. “Wartime Prosperity? A Reassessment of the U.S. Economy in the 1940s.” Journal of Economic History 52, no. 1 (March 1992): 41-60.

Holley, I.B. Buying Aircraft: Materiel Procurement for the Army Air Forces. Washington, DC: U.S. Government Printing Office, 1964.

Hooks, Gregory. Forging the Military-Industrial Complex: World War II’s Battle of the Potomac. Urbana: University of Illinois Press, 1991.

Janeway, Eliot. The Struggle for Survival: A Chronicle of Economic Mobilization in World War II. New Haven: Yale University Press, 1951.

Jeffries, John W. Wartime America: The World War II Home Front. Chicago: Ivan R. Dee, 1996.

Johnston, Louis and Samuel H. Williamson. “The Annual Real and Nominal GDP for the United States, 1789 – Present.” Available at Economic History Services, March 2004, URL: http://www.eh.net/hmit/gdp/; accessed 3 June 2005.

Kennedy, David M. Freedom from Fear: The American People in Depression and War, 1929-1945. New York: Oxford University Press, 1999.

Kryder, Daniel. Divided Arsenal: Race and the American State during World War II. New York: Cambridge University Press, 2000.

Koistinen, Paul A.C. Arsenal of World War II: The Political Economy of American Warfare, 1940-1945. Lawrence, KS: University Press of Kansas, 2004.

Lane, Frederic, with Blanche D. Coll, Gerald J. Fischer, and David B. Tyler. Ships for Victory: A History of Shipbuilding under the U.S. Maritime Commission in World War II. Baltimore: Johns Hopkins University Press, 1951; republished, 2001.

Lichtenstein, Nelson. Labor’s War at Home: The CIO in World War II. New York: Cambridge University Press, 1982.

Lingeman, Richard P. Don’t You Know There’s a War On? The American Home Front, 1941-1945. New York: G.P. Putnam’s Sons, 1970.

Milkman, Ruth. Gender at Work: The Dynamics of Job Segregation by Sex during World War II. Urbana: University of Illinois Press, 1987.

Hugh Rockoff, Rutgers University

Although the United States was actively involved in World War I for only nineteen months, from April 1917 to November 1918, the mobilization of the economy was extraordinary. (See the chronology at the end for key dates). Over four million Americans served in the armed forces, and the U.S. economy turned out a vast supply of raw materials and munitions. The war in Europe, of course, began long before the United States entered. On June 28, 1914, in Sarajevo, Gavrilo Princip, a young Serbian revolutionary, shot and killed Austrian Archduke Franz Ferdinand and his wife Sophie. A few months later the great powers of Europe were at war.

Many Europeans entered the war thinking that victory would come easily. Few had the understanding shown by a 26-year-old Conservative Member of Parliament, Winston Churchill, in 1901. “I have frequently been astonished to hear with what composure and how glibly Members, and even Ministers, talk of a European War.” He went on to point out that in the past European wars had been fought by small professional armies, but in the future huge populations would be involved, and he predicted that a European war would end “in the ruin of the vanquished and the scarcely less fatal commercial dislocation and exhaustion of the conquerors.”[1]

Reasons for U.S. Entry into the War

Once the war began, however, it became clear that Churchill was right. By the time the United States entered the war Americans knew that the price of victory would be high. What, then, impelled the United States to enter? What role did economic forces play? One factor was simply that Americans generally – some ethnic minorities were exceptions – felt stronger ties to Britain and France than to Germany and Austria. By 1917 it was clear that Britain and France were nearing exhaustion, and there was considerable sentiment in the United States for saving our traditional allies.

The insistence of the United States on her trading rights was also important. Soon after the war began Britain, France, and their allies set up a naval blockade of Germany and Austria. Even food was contraband. The Wilson Administration complained bitterly that the blockade violated international law. U.S. firms took to using European neutrals, such as Sweden, as intermediaries. Surely, the Americans argued, international law protected the right of one neutral to trade with another. Britain and France responded by extending the blockade to include the Baltic neutrals. The situation was similar to the difficulties the United States experienced during the Napoleonic wars, which drove the United States into a quasi-war against France, and to war against Britain.

Ultimately, however, it was not the conventional surface vessels used by Britain and France to enforce their blockade that enraged American opinion, but rather the submarines used by Germany. When the British (who provided most of the blockading ships) intercepted an American ship, the ship was escorted into a British port, the crew was well treated, and there was a chance of damage payments if it turned out that the interception was a mistake. The situation was very different when the Germans turned to submarine warfare. German submarines attacked without warning, and passengers had little chance to save themselves. To many Americans this was a brutal violation of the laws of war. The Germans felt they had to use submarines because their surface fleet was too small to defeat the British navy, let alone establish an effective counter-blockade.

The first submarine attack to inflame American opinion was the sinking of the Lusitania in May 1915. The Lusitania left New York with a cargo of passengers and freight, including war goods. When the ship was sunk, over 1,150 passengers were lost, including 115 Americans. In the months that followed, further sinkings brought more angry warnings from President Wilson. For a time the Germans gave way and agreed to warn American ships before sinking them and to save their passengers. In February 1917, however, the Germans renewed unrestricted submarine warfare in an attempt to starve Britain into submission. The loss of several U.S. ships was a key factor in President Wilson’s decision to break diplomatic relations with Germany and to seek a declaration of war.

U.S. Entry into the War and the Costs of Lost Trade

From a crude dollars-and-cents point of view it is hard to justify the war based on the trade lost to the United States. U.S. exports to Europe rose from $1.479 billion in 1913 to $4.062 billion in 1917. Suppose that the United States had stayed out of the war, and that as a result all trade with Europe was cut off. Suppose further that the resources that would have been used to produce exports for Europe were able to produce only half as much value when reallocated to other purposes such as producing goods for the domestic market or exports for non-European countries. Then the loss of output in 1917 would have been $2.031 billion per year. This was about 3.7 percent of GNP in 1917, and only about 6.3 percent of the total U.S. cost of the war.[2]
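The arithmetic behind this estimate can be checked directly. The sketch below uses only the figures just cited; the implied GNP and war-cost totals are back-solved from the 3.7 percent and 6.3 percent shares rather than taken from the source, so they are approximations for illustration only.

```python
# Back-of-the-envelope estimate of output lost in 1917 had the United States
# stayed neutral and all trade with Europe been cut off.
exports_1917 = 4.062    # U.S. exports to Europe in 1917, billions of dollars
recovery_rate = 0.5     # assumed fraction of value recovered by reallocating resources
lost_output = exports_1917 * (1 - recovery_rate)
print(round(lost_output, 3))              # 2.031 billion dollars per year

# Back-solving from the article's percentages gives the implied totals:
implied_gnp = lost_output / 0.037         # the loss was about 3.7 percent of 1917 GNP
implied_war_cost = lost_output / 0.063    # and about 6.3 percent of the total war cost
print(round(implied_gnp))                 # roughly 55 billion
print(round(implied_war_cost))            # roughly 32 billion
```

The implied total war cost of roughly $32 billion is consistent with John Maurice Clark’s estimate discussed later in the article.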

On March 21, 1918 the Germans launched a massive offensive on the Somme battlefield and successfully broke through the Allied lines. In May and early June, more than a year after U.S. entry into the war, the Germans followed up with fresh attacks that brought them within fifty miles of Paris. Although a small number of Americans participated, it was mainly the old war: the Germans against the British and the French. The arrival of large numbers of Americans, however, rapidly changed the course of the war. The turning point was the Second Battle of the Marne, fought between July 18 and August 6. The Allies, bolstered by significant numbers of Americans, halted the German offensive.

The initiative now passed to the Allies. They drove the Germans back in a series of attacks in which American troops played an increasingly important role. The first distinctively American offensive was the battle of the St. Mihiel Salient, fought from September 12 to September 16, 1918; over half a million U.S. troops participated. The last major offensive of the war, the Meuse-Argonne offensive, was launched on September 26, with British, French, and American forces attacking the Germans on a broad front. The Germans now realized that their military situation was deteriorating rapidly, and that they would have to agree to an end to the fighting. The Armistice occurred on November 11, 1918 – at the eleventh hour, of the eleventh day, of the eleventh month.

Mobilizing the Economy

The first and most important mobilization decision was the size of the army. When the United States entered the war, the army stood at 200,000, hardly enough to have a decisive impact in Europe. However, on May 18, 1917 a draft was imposed and the numbers were increased rapidly. Initially, the expectation was that the United States would mobilize an army of one million. The number, however, would go much higher. Overall some 4,791,172 Americans would serve in World War I. Some 2,084,000 would reach France, and 1,390,000 would see active combat.

Once the size of the Army had been determined, the demands on the economy became obvious, although the means to satisfy them did not: food and clothing, guns and ammunition, places to train, and the means of transport. The Navy also had to be expanded to protect American shipping and the troop transports. Contracts immediately began flowing from the Army and Navy to the private sector. The result, of course, was a rapid increase in federal spending from $477 million in 1916 to a peak of $8,450 million in 1918. (See Table 1 below for this and other data on the war effort.) The latter figure amounted to over 12 percent of GNP, and that amount excludes spending by other wartime agencies and spending by allies, much of which was financed by U.S. loans.

Notes on sources for Table 1, by row:

4. U.S. Bureau of the Census (1975), series Y458 and Y459. The estimates are the average for fiscal year t and fiscal year t+1.

5. Friedman and Schwartz (1970, table 1, June dates).

6-8. Balke and Gordon (1989, table 10, pp. 84-85). The original series were in 1982 dollars.

9. U.S. Bureau of the Census (1975), series D740.

10-11. Kendrick (1961, table A-VI, p. 306; table A-X, p. 312).

Although the Army would number in the millions, raising these numbers did not prove to be an unmanageable burden for the U.S. economy. The total labor force rose from about 40 million in 1916 to 44 million in 1918. This increase allowed the United States to field a large military while still increasing the labor force in the nonfarm private sector from 27.8 million in 1916 to 28.6 million in 1918. Real wages rose in the industrial sector during the war, perhaps by six or seven percent, and this increase combined with the ease of finding work was sufficient to draw many additional workers into the labor force.[3] Many of the men drafted into the armed forces were leaving school and would have been entering the labor force for the first time in any case. The farm labor force did drop slightly from 10.5 million in 1916 to 10.3 million workers in 1918, but farming included many low-productivity workers and farm output on the whole was sustained. Indeed, the all-important category of food grains showed strong increases in 1918 and 1919.

Figure 1 shows production of steel ingots and “total industrial production” – an index of steel, copper, rubber, petroleum, and so on – monthly from January 1914 through 1920.[4] It is evident that the United States built up its capacity to turn out these basic raw materials during the years of U.S. neutrality, when Britain and France were buying its supplies and the United States was beginning its own tentative buildup. The United States then simply maintained the output of these materials during the years of active U.S. involvement and concentrated on turning these materials into munitions.[5]

Figure 1

Prices on the New York Stock Exchange, shown in Figure 2, provide some insight into what investors thought about the strength of the economy during the war era. The upper line shows the Standard and Poor’s/Cowles Commission Index. The lower line shows the “real” price of stocks – the nominal index divided by the consumer price index. When the war broke out, the New York Stock Exchange was closed to prevent panic selling, so there are no prices for those months, although a lively “curb market” did develop. After the market reopened it rose as investors realized that the United States would profit as a neutral. The market then began a long slide that started when tensions between the United States and Germany rose at the end of 1916 and continued after the United States entered the war. A second, less pronounced rise began in the spring of 1918, when an Allied victory began to seem possible, and the increase gathered momentum after the Armistice. In real terms, however, as shown by the lower line in the figure, the rise in the stock market was not sufficient to offset the rise in consumer prices. At times one hears that war is good for the stock market, but the figures for World War I, like the figures for other wars, tell a more complex story.[6]

Figure 2

Table 2 shows the amounts of some of the key munitions produced during the war. During and after the war, critics complained that the mobilization was too slow. American troops, for example, often went into battle with French artillery – clear evidence, the critics implied, of incompetence somewhere in the supply chain. It does take time, however, to convert existing factories or build new ones and to work out the details of the production and distribution process. The last column of Table 2 shows peak monthly production, usually in October 1918, at an annual rate. It is obvious that by the end of the war the United States was beginning to achieve the “production miracle” that occurred in World War II. When Franklin Roosevelt called for 50,000 planes in World War II, his demand was seen as an astounding exercise in bravado; yet the last column of the table shows that the United States was approaching this level of production for Liberty engines by the end of World War I. There were efforts during the war to coordinate Allied production, and to some extent this was done – the United States produced much of the smokeless powder used by the Allies – but it was always clear that the United States wanted its own army equipped with its own munitions.

Table 2
Production of Selected Munitions in World War I

Munition                       Total production    Peak monthly production at an annual rate
Rifles                         3,550,000           3,252,000
Machine guns                   226,557             420,000
Artillery units                3,077               4,920
Smokeless powder (pounds)      632,504,000         n.a.
Toxic gas (tons)               10,817              32,712
De Haviland-4 bombers          3,227               13,200
Liberty airplane engines       13,574              46,200

Source: Ayres (1919, passim)

Financing the War

Where did the money come from to buy all these munitions? Then as now there were, the experts agreed, three basic ways to raise the money: (1) raising taxes, (2) borrowing from the public, and (3) printing money. In the Civil War the government had simply printed the famous greenbacks. In World War I it was possible to “print money” in a more roundabout way. The government could sell a bond to the newly created Federal Reserve. The Federal Reserve would pay for it by creating a deposit account for the government, which the government could then draw upon to pay its expenses. If the government first sold the bond to the general public, the process of money creation would be even more roundabout. In the end the result would be much the same as if the government had simply printed greenbacks: the government would be paying for the war with newly created money. The experts gave little consideration to printing money. The reason may be that the gold standard was sacrosanct. A financial policy that would cause inflation and drive the United States off the gold standard was not to be taken seriously. Some economists may have known the history of the greenbacks of the Civil War and the inflation they had caused.

The real choice appeared to be between raising taxes and borrowing from the public. Most economists of the World War I era believed that raising taxes was best. Here they were following a tradition that stretched back to Adam Smith who argued that it was necessary to raise taxes in order to communicate the true cost of war to the public. During the war Oliver Morton Sprague, one of the leading economists of the day, offered another reason for avoiding borrowing. It was unfair, Sprague argued, to draft men into the armed forces and then expect them to come home and pay higher taxes to fund the interest and principal on war bonds. Most men of affairs, however, thought that some balance would have to be struck between taxes and borrowing. Treasury Secretary William Gibbs McAdoo thought that financing about 50 percent from taxes and 50 percent from bonds would be about right. Financing more from taxes, especially progressive taxes, would frighten the wealthier classes and undermine their support for the war.

In October 1917 Congress responded to the call for higher taxes with the War Revenue Act. This act increased the personal and corporate income tax rates and established new excise, excess-profits, and luxury taxes. The tax rate for an income of $10,000 with four exemptions (about $140,000 in 2003 dollars) went from 1.2 percent in 1916 to 7.8 percent in 1918. For incomes of $1,000,000 the rate went from 10.3 percent in 1916 to 70.3 percent in 1918. These increases in taxes and the increase in nominal income raised revenues from $930 million in 1916 to $4,388 million in 1918. Federal expenditures, however, increased from $1,333 million in 1916 to $15,585 million in 1918. A huge gap had opened up that would have to be closed by borrowing.

Short-term borrowing was undertaken as a stopgap. To reduce the pressure on the Treasury and the danger of a surge in short-term rates, however, it was necessary to issue long-term bonds, so the Treasury created the famous Liberty Bonds. The first issue was a thirty-year bond bearing a 3.5% coupon, callable after fifteen years. There were three subsequent issues of Liberty Bonds, and one of shorter-term Victory Bonds after the Armistice. In all, the sale of these bonds raised over $20 billion for the war effort.

In order to strengthen the market for Liberty Bonds, Secretary McAdoo launched a series of nationwide campaigns. Huge rallies were held in which famous actors, such as Charlie Chaplin, urged the crowds to buy Liberty Bonds. The government also enlisted famous artists to draw posters urging people to purchase the bonds. One of these posters, now widely sought by collectors, is shown below.

Louis Raemaekers. After a Zeppelin Raid in London: “But Mother Had Done Nothing Wrong, Had She, Daddy?” Prevent this in New York: Invest in Liberty Bonds. 19" x 12". From the Rutgers University Library Collection of Liberty Bond Posters.

Although the campaigns may have improved the morale of both the armed forces and the people at home, how much the campaigns contributed to expanding the market for the bonds is an open question. The bonds were tax-exempt – the exact degree of exemption varied from issue to issue – and this undoubtedly made them attractive to investors in high tax brackets. Indeed, the Treasury was criticized for imposing high marginal taxes with one hand, and then creating a loophole with the other. The Federal Reserve also bought many of the bonds, creating new money. Some of this new “high-powered money” augmented the reserves of the commercial banks, which allowed them to buy bonds or to finance their purchase by private citizens. Thus, directly or indirectly, a good deal of the support for the bond market was the result of money creation rather than savings by the general public.

Table 3 provides a rough breakdown of the means used to finance the war. Of the total cost of the war, about 22 percent was financed by taxes and from 20 to 25 percent by printing money, which meant that from 53 to 58 percent was financed through the bond issues.

Table 3
Financing World War I, March 1917-May 1919

Source of finance                 Billions of dollars    Percent (M2)    Percent (M4)
Taxation and nontax receipts      7.3                    22              22
Borrowing from the public         24.0                   58              53
Direct money creation             1.6                    5               5
Indirect money creation (M2)      4.8                    15              –
Indirect money creation (M4)      6.6                    –               20
Total cost of the war             32.9                   100             100

Note: Direct money creation is the increase in the stock of high-powered money net of the increase in monetary gold. Indirect money creation is the increase in monetary liabilities not matched by the increase in high-powered money.

Source: Friedman and Schwartz (1963, 221)
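The shares in Table 3 can be reproduced from the dollar figures. This sketch treats borrowing genuinely absorbed by the public as the residual after taxes and money creation, which (on the assumption that the Friedman-Schwartz percentages are computed this way) is how the 58 and 53 percent figures arise.

```python
# Recomputing the Table 3 financing shares from the dollar amounts.
total = 32.9          # total cost of the war, billions of dollars
taxes = 7.3           # taxation and nontax receipts
direct = 1.6          # direct money creation (high-powered money net of monetary gold)
indirect_m2 = 4.8     # indirect money creation, M2 definition
indirect_m4 = 6.6     # indirect money creation, M4 definition

pct = lambda x: round(100 * x / total)
print(pct(taxes), pct(direct))                               # 22 and 5 percent
# Borrowing net of money creation is the residual share:
print(100 - pct(taxes) - pct(direct) - pct(indirect_m2))     # 58 (M2 basis)
print(100 - pct(taxes) - pct(direct) - pct(indirect_m4))     # 53 (M4 basis)
```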

Heavy reliance on the Federal Reserve meant, of course, that the stock of money increased rapidly. As shown in Table 1, the stock of money rose from $20.7 billion in 1916 to $35.1 billion in 1920, about 70 percent. The price level (GDP deflator) increased 85 percent over the same period.
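Putting the two growth figures just cited side by side shows that prices outran money, so real money balances actually declined a little over the period; the calculation below is a sketch using only the numbers in the text.

```python
# Money growth versus inflation, 1916-1920 (figures cited from Table 1).
m_1916, m_1920 = 20.7, 35.1                 # money stock, billions of dollars
money_growth = 100 * (m_1920 / m_1916 - 1)
print(round(money_growth))                  # about 70 percent

# With the GDP deflator up about 85 percent, deflated balances fell slightly:
real_change = 100 * ((m_1920 / 1.85) / m_1916 - 1)
print(round(real_change))                   # about -8 percent
```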

The Government’s Role in Mobilization

Once the contracts for munitions were issued and the money began flowing, the government might have relied on the price system to allocate resources. This was the policy followed during the Civil War. For a number of reasons, however, the government attempted to manage the allocation of resources from Washington. For one thing, the Wilson administration, reflecting the Progressive wing of the Democratic Party, was suspicious of the market, and doubted its ability to work quickly and efficiently, and to protect the average person against profiteering. Another factor was simply that the European belligerents had adopted wide-ranging economic controls and it made sense for the United States, a latecomer, to follow suit.

A wide variety of agencies were created to control the economy during the mobilization. A look at four of the most important – (1) the Food Administration, (2) the Fuel Administration, (3) the Railroad Administration, and (4) the War Industries Board – will suggest the extent to which the United States turned away from its traditional reliance on the market. Unfortunately, space precludes a review of many of the other agencies such as the War Shipping Board, which built noncombatant ships, the War Labor Board, which attempted to settle labor disputes, and the New Issues Committee, which vetted private issues of stocks and bonds.

Food Administration

The Food Administration was created by the Lever Food and Fuel Act in August 1917. Herbert Hoover, who had already won international fame as a relief administrator in China and Europe, was appointed to head it. The mission of the Food Administration was to stimulate the production of food and to assure its fair distribution, at a fair price, among American civilians, the armed forces, and the Allies. The Food Administration did not attempt to set maximum prices at retail or (with the exception of sugar) to ration food. The Act itself set what was then a high minimum price for wheat – the key grain in international markets – at the farm gate, although the price would eventually go higher. The markups of processors and distributors were controlled by licensing them and threatening to take their licenses away if they did not cooperate. The Food Administration then attempted to control prices and quantities at retail through calls for voluntary cooperation. Millers were encouraged to tie the sale of wheat flour to the sale of less desirable flours – corn meal, potato flour, and so on – thus making a virtue out of a practice that would have been regarded as a disreputable evasion of formal price ceilings. Bakers were encouraged to bake “Victory bread,” which included a wheat-flour substitute. Finally, Hoover urged Americans to curtail their consumption of the most valuable foodstuffs: there were, for example, Meatless Mondays and Wheatless Wednesdays.

Fuel Administration

The Fuel Administration was created under the same Act as the Food Administration. Harry Garfield, the son of President James Garfield, and the President of Williams College, was appointed to head it. Its main problem was controlling the price and distribution of bituminous coal. In the winter of 1918 a variety of factors combined to cause a severe coal shortage that forced school and factory closures. The Fuel Administration set the price of coal at the mines and the margins of dealers, mediated disputes in the coalfields, and worked with the Railroad Administration (described below) to reduce long hauls of coal.

Railroad Administration

The Wilson Administration nationalized the railroads and put them under the control of the Railroad Administration in December of 1917, in response to severe congestion in the railway network that was holding up the movement of war goods and coal. Wilson’s energetic Secretary of the Treasury (and son-in-law), William Gibbs McAdoo, was appointed to head it. The railroads would remain under government control for another 26 months. There has been considerable controversy over how well the system worked under federal control. Defenders of the takeover point out that the congestion was relieved and that policies that increased standardization and eliminated unnecessary competition were put in place. Critics of the takeover point to the large deficit that was incurred, nearly $1.7 billion, and to the deterioration of the capital stock of the industry. William J. Cunningham’s (1921) two papers in the Quarterly Journal of Economics, although written shortly after the event, still provide one of the most detailed and fair-minded treatments of the Railroad Administration.

War Industries Board

The most important federal agency, at least in terms of the scope of its mission, was the War Industries Board. The Board was established in July of 1917. Its purpose was no less than to assure the full mobilization of the nation’s resources for the purpose of winning the war. Initially the Board relied on persuasion to make its orders effective, but rising criticism of the pace of mobilization, and the problems with coal and transport in the winter of 1918, led to a strengthening of its role. In March 1918 the Board was reorganized, and Wilson placed Bernard Baruch, a Wall Street investor, in charge. Baruch installed a “priorities system” to determine the order in which contracts could be filled by manufacturers. Contracts rated AA by the War Industries Board had to be filled before contracts rated A, and so on. Although much hailed at the time, this system proved inadequate when tried in World War II. The War Industries Board also set prices of industrial products such as iron and steel, coke, rubber, and so on. This was handled by the Board’s independent Price Fixing Committee.

It is tempting to look at these experiments for clues on how the economy would perform under various forms of economic control. It is important, however, to keep in mind that these were very brief experiments. When the war ended in November 1918 most of the agencies immediately wound up their activities. Only the Railroad Administration and the War Shipping Board continued to operate. The War Industries Board, for example, was in operation only for a total of sixteen months; Bernard Baruch’s tenure was only eight months. Obviously only limited conclusions can be drawn from these experiments.

Costs of the War

The human and economic costs of the war were substantial. The death rate was high: 48,909 members of the armed forces died in battle, and 63,523 died from disease. Many of those who died from disease, perhaps 40,000, died from pneumonia during the influenza-pneumonia epidemic that hit at the end of the war. Some 230,074 members of the armed forces suffered nonmortal wounds.

John Maurice Clark provided what is still the most detailed and thoughtful estimate of the cost of the war: a total of about $32 billion. Clark tried to estimate what an economist would call the resource cost of the war. For that reason he included actual federal government spending on the Army and Navy, the amount of foreign obligations, and the difference between what government employees could earn in the private sector and what they actually earned. He excluded interest on the national debt and part of the subsidies paid to the Railroad Administration because he thought they were transfers. His estimate of $32 billion amounted to about 46 percent of GNP in 1918.

Long-run Economic Consequences

The war left a number of economic legacies. Here we will briefly describe three of the most important.

The finances of the federal government were permanently altered by the war. It is true that the tax increases put in place during the war were scaled back during the 1920s by successive Republican administrations. Tax rates, however, had to remain higher than before the war to pay for higher expenditures due mainly to interest on the national debt and veterans’ benefits.

The international economic position of the United States was permanently altered by the war. The United States had long been a debtor country. The United States emerged from the war, however, as a net creditor. The turnaround was dramatic. In 1914 U.S. investments abroad amounted to $5.0 billion, while total foreign investments in the United States amounted to $7.2 billion. Americans were net debtors to the tune of $2.2 billion. By 1919 U.S. investments abroad had risen to $9.7 billion, while total foreign investments in the United States had fallen to $3.3 billion: Americans were net creditors to the tune of $6.4 billion.[7] Before the war the center of the world capital market was London, and the Bank of England was the world’s most important financial institution; after the war leadership shifted to New York, and the role of the Federal Reserve was enhanced.
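
The turnaround described above is straightforward arithmetic on the two investment totals; a minimal sketch (figures in billions of dollars, taken from the text):

```python
# Net international investment position: U.S. investments abroad
# minus foreign investments in the United States (billions of dollars).
def net_position(us_abroad, foreign_in_us):
    return us_abroad - foreign_in_us

print(f"1914: {net_position(5.0, 7.2):+.1f}")  # negative: net debtor
print(f"1919: {net_position(9.7, 3.3):+.1f}")  # positive: net creditor
```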

The management of the war economy by a phalanx of federal agencies persuaded many Americans that the government could play an important positive role in the economy. This lesson remained dormant during the 1920s, but came to life when the United States faced the Great Depression. Both the general idea of fighting the Depression by creating federal agencies and many of the specific agencies and programs reflected precedents set in World War I. The Civilian Conservation Corps, a Depression-era agency that hired young men to work on conservation projects, for example, attempted to achieve the benefits of military training in a civilian setting. The National Industrial Recovery Act reflected ideas Bernard Baruch developed at the War Industries Board, and the Agricultural Adjustment Administration hearkened back to the Food Administration. Ideas about the appropriate role of the federal government in the economy, in other words, may have been the most important economic legacy of American involvement in World War I.

Clark, John Maurice. The Cost of the World War to the American People. New Haven: Yale University Press for the Carnegie Endowment for International Peace, 1931.

Cuff, Robert D. The War Industries Board: Business-Government Relations during World War I. Baltimore: Johns Hopkins University Press, 1973.

Cunningham, William J. “The Railroads under Government Operation. I: The Period to the Close of 1918.” Quarterly Journal of Economics 35, no. 2 (1921): 288-340. “II: From January 1, 1919, to March 1, 1920.” Quarterly Journal of Economics 36, no. 1. (1921): 30-71.

Friedman, Milton, and Anna J. Schwartz. A Monetary History of the United States, 1867-1960. Princeton: Princeton University Press, 1963.

Friedman, Milton, and Anna J. Schwartz. Monetary Statistics of the United States: Estimates, Sources, and Methods. New York: Columbia University Press, 1970.

Gilbert, Martin. The First World War: A Complete History. New York: Henry Holt, 1994.

Kendrick, John W. Productivity Trends in the United States. Princeton: Princeton University Press, 1961.

Koistinen, Paul A. C. Mobilizing for Modern War: The Political Economy of American Warfare, 1865-1919. Lawrence, KS: University Press of Kansas, 1997.

Endnotes

[2] U.S. exports to Europe are from U.S. Bureau of the Census (1975), series U324.

[3] Real wages in manufacturing were computed by dividing “Hourly Earnings in Manufacturing Industries” by the Consumer Price Index (U.S. Bureau of the Census 1975, series D766 and E135).

[4] Steel ingots are from the National Bureau of Economic Research, macrohistory database, series m01135a, www.nber.org. Total Industrial Production is from Miron and Romer (1990), Table 2.

[5] The sharp and temporary drop in the winter of 1918 was due to a shortage of coal.

[6] The chart shows end-of-month values of the S&P/Cowles Composite Stock Index, from Global Financial Data: http://www.globalfinancialdata.com/. To get real prices I divided this index by monthly values of the United States Consumer Price Index for all items. This is available as series 04128 in the National Bureau of Economic Research Macro-Data Base available at http://www.nber.org/.

[7] U.S. investments abroad (U.S. Bureau of the Census 1975, series U26); Foreign investments in the U.S. (U.S.

Gene Smiley, Marquette University

Introduction

The interwar period in the United States, and in the rest of the world, is a most interesting era. The decade of the 1930s marks the most severe depression in our history and ushered in sweeping changes in the role of government. Economists and historians have rightly given much attention to that decade. However, with all of this concern about the growing role of government in economic activity in the 1930s, the decade of the 1920s often tends to get overlooked. This is unfortunate because the 1920s were a period of vigorous, vital economic growth. They mark the first truly modern decade, and dramatic economic developments occurred in those years. The automobile was rapidly adopted, to the detriment of passenger rail travel. Though suburbs had been growing since the late nineteenth century, their growth had been tied to rail or trolley access and was limited to the largest cities. The flexibility of car access changed this, and the growth of suburbs began to accelerate. The demands of trucks and cars led to rapid growth in the construction of all-weather surfaced roads to facilitate their movement. Rapidly expanding electric utility networks led to new consumer appliances and new types of lighting and heating for homes and businesses. The introduction of the radio, radio stations, and commercial radio networks began to break up rural isolation, as did the expansion of local and long-distance telephone communications. Recreational activities such as traveling, going to movies, and professional sports became major businesses. The period saw major innovations in business organization and manufacturing technology. The Federal Reserve System first tested its powers, and the United States moved to a dominant position in international trade and global business. These developments make the 1920s a period of considerable importance independent of what happened in the 1930s.

National Product and Income and Prices

We begin the survey of the 1920s with an examination of the overall production in the economy, GNP, the most comprehensive measure of aggregate economic activity. Real GNP growth during the 1920s was relatively rapid, 4.2 percent a year from 1920 to 1929 according to the most widely used estimates. (Historical Statistics of the United States, or HSUS, 1976) Real GNP per capita grew 2.7 percent per year between 1920 and 1929. By both nineteenth and twentieth century standards these were relatively rapid rates of real economic growth and they would be considered rapid even today.
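
Compounding the annual growth rates quoted above shows how large the cumulative gains were over the decade. The sketch below is purely illustrative arithmetic; the 4.2 and 2.7 percent rates and the nine-year 1920-1929 span are from the text, and nothing else is assumed.

```python
# Cumulative growth implied by a constant annual growth rate
# compounded over the nine years from 1920 to 1929.
def cumulative_growth(annual_rate, years):
    return (1 + annual_rate) ** years - 1

print(f"Real GNP:            {cumulative_growth(0.042, 9):.1%}")  # about 45 percent
print(f"Real GNP per capita: {cumulative_growth(0.027, 9):.1%}")  # about 27 percent
```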

There were several interruptions to this growth. In mid-1920 the American economy began to contract and the 1920-1921 depression lasted about a year, but a rapid recovery restored full employment by 1923. As will be discussed below, the Federal Reserve System’s monetary policy was a major factor in initiating the 1920-1921 depression. From 1923 through 1929 growth was much smoother. There was a very mild recession in 1924 and another mild recession in 1927, both of which may be related to oil price shocks (McMillin and Parker, 1994). The 1927 recession was also associated with Henry Ford’s shutdown of all his factories for six months in order to change over from the Model T to the new Model A automobile. Though the Model T’s market share was declining after 1924, in 1926 Ford’s Model T still made up nearly 40 percent of all the new cars produced and sold in the United States. The Great Depression began in the summer of 1929, possibly as early as June. The initial downturn was relatively mild but the contraction accelerated after the crash of the stock market at the end of October. Real total GNP fell 10.2 percent from 1929 to 1930, while real GNP per capita fell 11.5 percent over the same year.

Price changes during the 1920s are shown in Figure 2. The Consumer Price Index, CPI, is a better measure of changes in the prices of commodities and services that a typical consumer would purchase, while the Wholesale Price Index, WPI, is a better measure of changes in the cost of inputs for businesses. As the figure shows, the 1920-1921 depression was marked by extraordinarily large price decreases. Consumer prices fell 11.3 percent from 1920 to 1921 and fell another 6.6 percent from 1921 to 1922. After that consumer prices were relatively constant and actually fell slightly from 1926 to 1927 and from 1927 to 1928. Wholesale prices show greater variation. The 1920-1921 depression hit farmers very hard. Prices had been bid up with the increasing foreign demand during the First World War. As European production began to recover after the war, prices began to fall. Though the prices of agricultural products fell from 1919 to 1920, the depression brought on dramatic declines in the prices of raw agricultural produce as well as many other inputs that firms employ. In the scramble to beat price increases during 1919, firms had built up large inventories of raw materials and purchased inputs, and this temporary increase in demand led to even larger price increases. With the depression, firms began to draw down those inventories. The result was that the prices of raw materials and manufactured inputs fell rapidly along with the prices of agricultural produce—the WPI dropped 45.9 percent between 1920 and 1921. The price changes probably tend to overstate the severity of the 1920-1921 depression. Romer’s recent work (1988) suggests that prices changed much more easily in that depression, reducing the drop in production and employment. Wholesale prices in the rest of the 1920s were relatively stable, though they were more likely to fall than to rise.
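
The two consumer-price declines quoted above compound to a substantial cumulative fall; a small sketch of that arithmetic (the 11.3 and 6.6 percent figures are from the text):

```python
# Compound a sequence of percentage declines into a cumulative fall.
def compound_declines(*rates):
    level = 1.0
    for r in rates:
        level *= 1 - r
    return 1 - level

# CPI fell 11.3 percent (1920-21) and a further 6.6 percent (1921-22).
print(f"Cumulative CPI fall, 1920-1922: {compound_declines(0.113, 0.066):.1%}")
```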

Economic Growth in the 1920s

Despite the 1920-1921 depression and the minor interruptions in 1924 and 1927, the American economy exhibited impressive economic growth during the 1920s. Though some commentators in later years thought that the existence of some slow growing or declining sectors in the twenties suggested weaknesses that might have helped bring on the Great Depression, few now argue this. Economic growth never occurs in all sectors at the same time and at the same rate. Growth reallocates resources from declining or slower growing sectors to the more rapidly expanding sectors in accordance with new technologies, new products and services, and changing consumer tastes.

Economic growth in the 1920s was impressive. Ownership of cars, new household appliances, and housing was spread widely through the population. New products and processes of producing those products drove this growth. The combination of the widening use of electricity in production and the growing adoption of the moving assembly line in manufacturing combined to bring on a continuing rise in the productivity of labor and capital. Though the average workweek in most manufacturing remained essentially constant throughout the 1920s, in a few industries, such as railroads and coal production, it declined. (Whaples 2001) New products and services created new markets such as the markets for radios, electric iceboxes, electric irons, fans, electric lighting, vacuum cleaners, and other laborsaving household appliances. This electricity was distributed by the growing electric utilities. The stocks of those companies helped create the stock market boom of the late twenties. RCA, one of the glamour stocks of the era, paid no dividends but its value appreciated because of expectations for the new company. Like the Internet boom of the late 1990s, the electricity boom of the 1920s fed a rapid expansion in the stock market.

Fed by continuing productivity advances and new products and services and facilitated by an environment of stable prices that encouraged production and risk taking, the American economy embarked on a sustained expansion in the 1920s.

Population and Labor in the 1920s

At the same time that overall production was growing, population growth was declining. As can be seen in Figure 3, from an annual rate of increase of 1.85 and 1.93 percent in 1920 and 1921, respectively, population growth rates fell to 1.23 percent in 1928 and 1.04 percent in 1929.

These changes in the overall growth rate were linked to the birth and death rates of the resident population and a decrease in foreign immigration. Though the crude death rate changed little during the period, the crude birth rate fell sharply into the early 1930s. (Figure 4) There are several explanations for the decline in the birth rate during this period. First, there was an accelerated rural-to-urban migration; urban families have tended to have fewer children than rural families because urban children do not augment family incomes through unpaid work the way rural children do. Second, the period also saw continued improvement in women’s job opportunities and a rise in their labor force participation rates.

Immigration also fell sharply. In 1917 the federal government began to limit immigration and in 1921 an immigration act limited the number of prospective citizens of any nationality entering the United States each year to no more than 3 percent of that nationality’s resident population as of the 1910 census. A new act in 1924 lowered this to 2 percent of the resident population at the 1890 census and more firmly blocked entry for people from central, southern, and eastern European nations. The limits were relaxed slightly in 1929.

The American population also continued to move during the interwar period. Two regions experienced the largest losses in population shares, New England and the Plains. For New England this was a continuation of a long-term trend. The population share for the Plains region had been rising through the nineteenth century. In the interwar period its agricultural base, combined with the continuing shift from agriculture to industry, led to a sharp decline in its share. The regions gaining population were the Southwest and, particularly, the Far West; California began its rapid growth at this time.

During the 1920s the labor force grew at a more rapid rate than population. This somewhat more rapid growth came from the declining share of the population less than 14 years old and therefore not in the labor force. In contrast, the labor force participation rates, or fraction of the population aged 14 and over that was in the labor force, declined during the twenties from 57.7 percent to 56.3 percent. This was entirely due to a fall in the male labor force participation rate from 89.6 percent to 86.8 percent as the female labor force participation rate rose from 24.3 percent to 25.1 percent. The primary source of the fall in male labor force participation rates was a rising retirement rate. Employment rates for males who were 65 or older fell from 60.1 percent in 1920 to 58.0 percent in 1930.
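
As a rough consistency check on the participation rates quoted above, the sketch below combines the male and female rates into an overall rate under the purely hypothetical assumption that men and women each made up about half of the population aged 14 and over.

```python
# Overall participation rate as a weighted average of the male and
# female rates; the 50/50 population split is a hypothetical assumption.
def overall_rate(male_rate, female_rate, male_share=0.5):
    return male_share * male_rate + (1 - male_share) * female_rate

print(f"1920: {overall_rate(89.6, 24.3):.1f} percent (reported: 57.7)")
print(f"1930: {overall_rate(86.8, 25.1):.1f} percent (reported: 56.3)")
```

That the computed figures fall slightly below the reported overall rates is consistent with males making up somewhat more than half of the population aged 14 and over.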

With the depression of 1920-1921 the unemployment rate rose rapidly from 5.2 to 8.7 percent. The recovery reduced unemployment to an average rate of 4.8 percent in 1923. The unemployment rate rose to 5.8 percent in the recession of 1924 and to 5.0 percent with the slowdown in 1927. Otherwise unemployment remained relatively low. The onset of the Great Depression from the summer of 1929 on brought the unemployment rate from 4.6 percent in 1929 to 8.9 percent in 1930. (Figure 5)

Earnings for laborers varied during the twenties. Table 1 presents average weekly earnings for 25 manufacturing industries. For these industries, male skilled and semi-skilled laborers generally commanded a premium of 35 percent over the earnings of unskilled male laborers in the twenties, and unskilled males received on average 35 percent more than females. Real average weekly earnings for these 25 manufacturing industries rose somewhat during the 1920s. For skilled and semi-skilled male workers, real average weekly earnings rose 5.3 percent between 1923 and 1929, while real average weekly earnings for unskilled males rose 8.7 percent over the same years. Real average weekly earnings for females rose only 1.7 percent between 1923 and 1929. Real weekly earnings for bituminous and lignite coal miners fell as the coal industry encountered difficult times in the late twenties, and the real daily wage rate for farmworkers in the twenties, reflecting the ongoing difficulties in agriculture, fell after the recovery from the 1920-1921 depression.

The 1920s were not kind to labor unions even though the First World War had solidified the dominance of the American Federation of Labor among labor unions in the United States. The rapid growth in union membership fostered by federal government policies during the war ended in 1919. A committee of AFL craft unions undertook a successful membership drive in the steel industry in that year. When U.S. Steel refused to bargain, the committee called a strike, the failure of which was a sharp blow to the unionization drive. (Brody, 1965) In the same year, the United Mine Workers undertook a large strike and also lost. These two lost strikes and the 1920-21 depression took the impetus out of the union movement and led to severe membership losses that continued through the twenties. (Figure 6)

Under Samuel Gompers’s leadership, the AFL’s “business unionism” had attempted to promote the union and collective bargaining as the primary answer to workers’ concerns with wages, hours, and working conditions. The AFL officially opposed any government actions that would have diminished worker attachment to unions by providing competing benefits, such as government-sponsored unemployment insurance, minimum wage proposals, maximum hours proposals, and social security programs. As Lloyd Ulman (1961) points out, the AFL under Gompers’s direction differentiated between statutes on the basis of whether they would or would not aid collective bargaining. After Gompers’s death, William Green led the AFL in a policy change as the AFL promoted the idea of union-management cooperation to improve output and promote greater employer acceptance of unions. But Irving Bernstein (1965) concludes that, on the whole, union-management cooperation in the twenties was a failure.

To combat the appeal of unions in the twenties, firms used the “yellow-dog” contract requiring employees to swear they were not union members and would not join one; the “American Plan” promoting the open shop and contending that the closed shop was un-American; and welfare capitalism. The most common aspects of welfare capitalism included personnel management to handle employment issues and problems, the doctrine of “high wages,” company group life insurance, old-age pension plans, stock-purchase plans, and more. Some firms formed company unions to thwart independent unionization and the number of company-controlled unions grew from 145 to 432 between 1919 and 1926.

Until the late thirties the AFL was a voluntary association of independent national craft unions. Craft unions relied upon the particular skills the workers had acquired (their craft) to distinguish the workers and provide barriers to the entry of other workers. Most craft unions required a period of apprenticeship before a worker was fully accepted as a journeyman worker. The skills, and often lengthy apprenticeship, constituted the entry barrier that gave the union its bargaining power. There were only a few unions that were closer to today’s industrial unions where the required skills were much less (or nonexistent) making the entry of new workers much easier. The most important of these industrial unions was the United Mine Workers, UMW.

The AFL had been created on two principles: the autonomy of the national unions and the exclusive jurisdiction of the national union. Individual union members were not, in fact, members of the AFL; rather, they were members of the local and national union, and the national was a member of the AFL. Representation in the AFL gave dominance to the national unions, and, as a result, the AFL had little effective power over them. The craft lines, however, had never been distinct and increasingly became blurred. The AFL was constantly mediating jurisdictional disputes between member national unions. Because the AFL and its individual unions were not set up to appeal to and work for the relatively less skilled industrial workers, union organizing and growth lagged in the twenties.

Agriculture

The onset of the First World War in Europe brought unprecedented prosperity to American farmers. As agricultural production in Europe declined, the demand for American agricultural exports rose, leading to rising farm product prices and incomes. In response to this, American farmers expanded production by moving onto marginal farmland, such as Wisconsin cutover property on the edge of the woods and hilly terrain in the Ozark and Appalachian regions. They also increased output by purchasing more machinery, such as tractors, plows, mowers, and threshers. The price of farmland, particularly marginal farmland, rose in response to the increased demand, and the debt of American farmers increased substantially.

This expansion of American agriculture continued past the end of the First World War as farm exports to Europe and farm prices initially remained high. However, agricultural production in Europe recovered much faster than most observers had anticipated. Even before the onset of the short depression in 1920, farm exports and farm product prices had begun to fall. During the depression, farm prices virtually collapsed. From 1920 to 1921, the consumer price index fell 11.3 percent, the wholesale price index fell 45.9 percent, and the farm products price index fell 53.3 percent. (HSUS, Series E40, E42, and E135)

Real average net income per farm fell 72.6 percent between 1920 and 1921 and, though rising in the twenties, never recovered the relative levels of 1918 and 1919. (Figure 7) Farm mortgage foreclosures rose and stayed at historically high levels for the entire decade of the 1920s. (Figure 8) The value of farmland and buildings fell throughout the twenties and, for the first time in American history, the number of cultivated acres actually declined as farmers pulled back from the marginal farmland brought into production during the war. Rather than indicators of a general depression in agriculture in the twenties, these were the results of the financial commitments made by overoptimistic American farmers during and directly after the war. The foreclosures were generally on second mortgages rather than on first mortgages, as they were in the early 1930s. (Johnson, 1973; Alston, 1983)

A Declining Sector

A major difficulty in analyzing the interwar agricultural sector lies in separating the effects of the 1920-21 and 1929-33 depressions from those that arose because agriculture was declining relative to the other sectors. Very slowly growing demand for basic agricultural products and significant increases in the productivity of labor, land, and machinery in agricultural production, combined with much more rapid economic growth in the nonagricultural sectors of the economy, required a shift of resources, particularly labor, out of agriculture. (Figure 9) The market induces labor to move voluntarily from one sector to another through income differentials, suggesting that even in the absence of the effects of the depressions, farm incomes would have been lower than nonfarm incomes so as to bring about this migration.

The continuous substitution of tractor power for horse and mule power released hay and oats acreage to grow crops for human consumption. Though cotton and tobacco continued as the primary crops in the south, the relative production of cotton continued to shift to the west as production in Arkansas, Missouri, Oklahoma, Texas, New Mexico, Arizona, and California increased. As quotas reduced immigration and incomes rose, the demand for cereal grains grew slowly—more slowly than the supply—and the demand for fruits, vegetables, and dairy products grew. Refrigeration and faster freight shipments expanded the milk sheds further from metropolitan areas. Wisconsin and other North Central states began to ship cream and cheeses to the Atlantic Coast. Due to transportation improvements, specialized truck farms and the citrus industry became more important in California and Florida. (Parker, 1972; Soule, 1947)

The relative decline of the agricultural sector in this period was closely related to the low income elasticity of demand for many farm products, particularly cereal grains, pork, and cotton. As incomes grew, the demand for these staples grew much more slowly. At the same time, rising land and labor productivity were increasing the supplies of staples, causing real prices to fall.

Table 3 presents selected agricultural productivity statistics for these years. Those data indicate that there were greater gains in labor productivity than in land productivity (or per acre yields). Per acre yields in wheat and hay actually decreased between 1915-19 and 1935-39. These productivity increases, which released resources from the agricultural sector, were the result of technological improvements in agriculture.

Technological Improvements In Agricultural Production

In many ways the adoption of the tractor in the interwar period symbolizes the technological changes that occurred in the agricultural sector. This changeover in the power source that farmers used had far-reaching consequences and altered the organization of the farm and the farmers’ lifestyle. The adoption of the tractor was land saving (by releasing acreage previously used to produce crops for workstock) and labor saving. At the same time it increased the risks of farming because farmers were now much more exposed to the marketplace. They could not produce their own fuel for tractors as they had for the workstock. Rather, this had to be purchased from other suppliers. Repair and replacement parts also had to be purchased, and sometimes the repairs had to be undertaken by specialized mechanics. The purchase of a tractor also commonly required the purchase of new complementary machines; therefore, the decision to purchase a tractor was not an isolated one. (White, 2001; Ankli, 1980; Ankli and Olmstead, 1981; Musoke, 1981; Whatley, 1987). These changes resulted in more and more farmers purchasing and using tractors, but the rate of adoption varied sharply across the United States.

Technological innovations in plants and animals also raised productivity. Hybrid seed corn increased yields from an average of 40 bushels per acre to 100 to 120 bushels per acre. New varieties of wheat were developed from the hardy Russian and Turkish wheat varieties which had been imported. The U.S. Department of Agriculture’s Experiment Stations took the lead in developing wheat varieties for different regions. For example, in the Columbia River Basin new varieties raised yields from an average of 19.1 bushels per acre in 1913-22 to 23.1 bushels per acre in 1933-42. (Shepherd, 1980) New hog breeds produced more meat and new methods of swine sanitation sharply increased the survival rate of piglets. An effective serum for hog cholera was developed, and the federal government led the way in the testing and eradication of bovine tuberculosis and brucellosis. Prior to the Second World War, a number of pesticides to control animal disease were developed, including cattle dips and disinfectants. By the mid-1920s a vaccine for “blackleg,” an infectious, usually fatal disease that particularly struck young cattle, was completed. The cattle tick, which carried Texas Fever, was largely controlled through inspections. (Schlebecker, 1975; Bogue, 1983; Wood, 1980)

Federal Agricultural Programs in the 1920s

Though there was substantial agricultural discontent in the period from the Civil War to the late 1890s, the period from then to the onset of the First World War was relatively free from overt farmers’ complaints. In later years farmers dubbed the 1910-14 period agriculture’s “golden years” and used the prices of farm crops and farm inputs in that period as a standard by which to judge crop and input prices in later years. The problems that arose in the agricultural sector during the twenties once again led to insistent demands by farmers for government to alleviate their distress.

Though there were increasing calls for direct federal government intervention to limit production and raise farm prices, this was not used until Roosevelt took office. Rather, there was a reliance upon the traditional method of aiding injured groups, tariffs, and upon the “sanctioning and promotion of cooperative marketing associations.” In 1921 Congress attempted to control the grain exchanges and compel merchants and stockyards to charge “reasonable rates” with the Packers and Stockyards Act and the Grain Futures Act. In 1922 Congress passed the Capper-Volstead Act to promote agricultural cooperatives and the Fordney-McCumber Tariff to impose high duties on most agricultural imports. The Cooperative Marketing Act of 1924 did not bolster failing cooperatives as it was supposed to do. (Hoffman and Libecap, 1991)

Twice between 1924 and 1928 Congress passed “McNary-Haugen” bills, but President Calvin Coolidge vetoed both. The McNary-Haugen bills proposed to establish “fair” exchange values (based on the 1910-14 period) for each product and to maintain them through tariffs and a private corporation that would be chartered by the government and could buy enough of each commodity to keep its price up to the computed fair level. The revenues were to come from taxes imposed on farmers. The Hoover administration secured passage of the Agricultural Marketing Act in 1929 and the Hawley-Smoot Tariff in 1930. The Marketing Act committed the federal government to a policy of stabilizing farm prices through several nongovernment institutions, but these failed during the depression. Federal intervention in the agricultural sector really came of age during the New Deal era of the 1930s.

Manufacturing

Agriculture was not the only sector experiencing difficulties in the twenties. Other industries, such as textiles, boots and shoes, and coal mining, also experienced trying times. However, at the same time that these industries were declining, other industries, such as electrical appliances, automobiles, and construction, were growing rapidly. The simultaneous existence of growing and declining industries has been common to all eras because economic growth and technological progress never affect all sectors in the same way. In general, in manufacturing there was a rapid rate of growth of productivity during the twenties. The rise of real wages due to immigration restrictions and the slower growth of the resident population spurred this. Transportation improvements and communications advances were also responsible. These developments brought about differential growth in the various manufacturing sectors in the United States in the 1920s.

Because of the historic pattern of economic development in the United States, the Northeast was the first area to really develop a manufacturing base. By the mid-nineteenth century the East North Central region was creating a manufacturing base, and the other regions began to create manufacturing bases in the last half of the nineteenth century, resulting in a relative westward and southward shift of manufacturing activity. This trend continued in the 1920s as the New England and Middle Atlantic regions’ shares of manufacturing employment fell while all of the other regions, excluding the West North Central region, gained. There was considerable variation in the growth of the industries and shifts in their ranking during the decade. The largest broadly defined industries were, not surprisingly, food and kindred products; textile mill products; those producing and fabricating primary metals; machinery production; and chemicals. When industries are more narrowly defined, the automobile industry, which ranked third in manufacturing value added in 1919, ranked first by the mid-1920s.

Productivity Developments

Gavin Wright (1990) has argued that one of the underappreciated characteristics of American industrial history has been its reliance on mineral resources. Wright argues that the growing American strength in industrial exports and industrialization in general relied on an increasing intensity in nonreproducible natural resources. The large American market was knit together as one large market without internal barriers through the development of widespread low-cost transportation. Many distinctively American developments, such as continuous-process, mass-production methods, were associated with the “high throughput” of fuel and raw materials relative to labor and capital inputs. As a result the United States became the dominant industrial force in the world in the 1920s and 1930s. According to Wright, after World War II “the process by which the United States became a unified ‘economy’ in the nineteenth century has been extended to the world as a whole. To a degree, natural resources have become commodities rather than part of the ‘factor endowment’ of individual countries.” (Wright, 1990)

In addition to this growing intensity in the use of nonreproducible natural resources as a source of productivity gains in American manufacturing, other technological changes during the twenties and thirties tended to raise the productivity of the existing capital through the replacement of critical types of capital equipment with superior equipment and through changes in management methods. (Soule, 1947; Lorant, 1967; Devine, 1983; Oshima, 1984) Some changes, such as the standardization of parts and processes and the reduction of the number of styles and designs, raised the productivity of both capital and labor. Modern management techniques, first developed by Frederick W. Taylor, were applied on a wider scale.

One of the important forces contributing to mass production and increased productivity was the transition to electric power. (Devine, 1983) By 1929 about 70 percent of manufacturing activity relied on electricity, compared to roughly 30 percent in 1914. Steam provided 80 percent of the mechanical drive capacity in manufacturing in 1900, but electricity provided over 50 percent by 1920 and 78 percent by 1929. An increasing number of factories were buying their power from electric utilities. In 1909, 64 percent of the electric motor capacity in manufacturing establishments used electricity generated on the factory site; by 1919, 57 percent of the electricity used in manufacturing was purchased from independent electric utilities.

The shift from coal to oil and natural gas and from raw unprocessed energy in the forms of coal and waterpower to processed energy in the form of internal combustion fuel and electricity increased thermal efficiency. After the First World War energy consumption relative to GNP fell, there was a sharp increase in the growth rate of output per labor-hour, and the output per unit of capital input once again began rising. These trends can be seen in the data in Table 3. Labor productivity grew much more rapidly during the 1920s than in the previous or following decade. Capital productivity had declined in the decade before the 1920s but increased sharply during the twenties and continued to rise in the following decade. Alexander Field (2003) has argued that the 1930s were the most technologically progressive decade of the twentieth century, basing his argument on the growth of multi-factor productivity as well as the impressive array of technological developments during the thirties. However, the twenties also saw impressive increases in labor and capital productivity as, particularly, developments in energy and transportation accelerated.

Warren Devine, Jr. (1983) reports that in the twenties the most important result of the adoption of electricity was that it served as an indirect “lever to increase production.” There were a number of ways in which this occurred. Electricity brought about an increased flow of production by allowing new flexibility in the design of buildings and the arrangement of machines. In this way it maximized throughput. Electric cranes were an “inestimable boon” to production because with adequate headroom they could operate anywhere in a plant, something that mechanical power transmission to overhead cranes did not allow. Electricity made possible the use of portable power tools that could be taken anywhere in the factory. Electricity brought about improved illumination, ventilation, and cleanliness in the plants, dramatically improving working conditions. It improved the control of machines since there was no longer belt slippage with overhead line shafts and belt transmission, and there were fewer limitations on the operating speeds of machines. Finally, it made plant expansion much easier than when overhead shafts and belts had been relied upon for operating power.

The mechanization of American manufacturing accelerated in the 1920s, and this led to a much more rapid growth of productivity in manufacturing compared to earlier decades and to other sectors at that time. There were several forces that promoted mechanization. One was the rapidly expanding aggregate demand during the prosperous twenties. Another was the technological developments in new machines and processes, of which electrification played an important part. Finally, Harry Jerome (1934) and, later, Harry Oshima (1984) both suggest that the price of unskilled labor began to rise as immigration sharply declined with new immigration laws and falling population growth. This accelerated the mechanization of the nation’s factories.

Technological changes during this period can be documented for a number of individual industries. In bituminous coal mining, labor productivity rose when mechanical loading devices reduced the labor required by 24 to 50 percent. The burst of paved road construction in the twenties led to the development of a finishing machine to smooth the surface of cement highways, and this reduced the labor requirement by 40 to 60 percent. Mechanical pavers that spread centrally mixed materials further increased productivity in road construction. These replaced the roadside dump and wheelbarrow methods of spreading the cement. Jerome (1934) reports that the glass in electric light bulbs was made by new machines that cut the number of labor-hours required for their manufacture by nearly half. New machines to produce cigarettes and cigars, for warp-tying in textile production, and for pressing clothes in clothing shops also cut labor-hours. The Banbury mixer reduced the labor input in the production of automobile tires by half, and output per worker of inner tubes increased about four times with a new production method. However, as Daniel Nelson (1987) points out, the continuing advances were the “cumulative process resulting from a vast number of successive small changes.” Because of these continuing advances in the quality of the tires and in the manufacturing of tires, between 1910 and 1930 “tire costs per thousand miles of driving fell from $9.39 to $0.65.”

John Lorant (1967) has documented other technological advances that occurred in American manufacturing during the twenties. For example, the organic chemical industry developed rapidly due to the introduction of the Weizman fermentation process. In a similar fashion, nearly half of the productivity advances in the paper industry were due to the “increasingly sophisticated applications of electric power in paper manufacturing processes,” especially the Fourdrinier paper-making machines. As Avi Cohen (1984) has shown, the continuing advances in these machines were the result of evolutionary changes to the basic machine. Mechanization in many types of mass-production industries raised the productivity of labor and capital. In the glass industry, automatic feeding and other types of fully automatic production raised the efficiency of the production of glass containers, window glass, and pressed glass. Giedion (1948) reported that the production of bread was “automatized” in all stages during the 1920s.

Though not directly bringing about productivity increases in manufacturing processes, developments in the management of manufacturing firms, particularly the largest ones, also significantly affected their structure and operation. Alfred D. Chandler, Jr. (1962) has argued that the structure of a firm must follow its strategy. Until the First World War most industrial firms were centralized, single-division firms even when becoming vertically integrated. When this began to change the management of the large industrial firms had to change accordingly.

Because of these changes in the size and structure of the firm during the First World War, E. I. du Pont de Nemours and Company was led to adopt a strategy of diversifying into the production of largely unrelated product lines. The firm found that the centralized, departmentalized structure that had served it so well was not suited to this strategy, and its poor business performance led its executives to develop between 1919 and 1921 a decentralized, multidivisional structure that boosted it to the first rank among American industrial firms.

General Motors had a somewhat different problem. By 1920 it was already decentralized into separate divisions. In fact, there was so much decentralization that those divisions essentially remained separate companies and there was little coordination between the operating divisions. A financial crisis at the end of 1920 ousted W. C. Durant and brought in the du Ponts and Alfred Sloan. Sloan, who had seen the problems at GM but had been unable to convince Durant to make changes, began reorganizing the management of the company. Over the next several years Sloan and other GM executives developed the general office for a decentralized, multidivisional firm.

Though facing related problems at nearly the same time, GM and du Pont developed their decentralized, multidivisional organizations separately. As other manufacturing firms began to diversify, GM and du Pont became the models for reorganizing the management of the firms. In many industrial firms these reorganizations were not completed until well after the Second World War.

Competition, Monopoly, and the Government

The rise of big businesses, which accelerated in the postbellum period and particularly during the first great turn-of-the-century merger wave, continued in the interwar period. Between 1925 and 1939 the share of manufacturing assets held by the 100 largest corporations rose from 34.5 to 41.9 percent. (Niemi, 1980) As a public policy, the concern with monopolies diminished in the 1920s even though firms were growing larger. But the growing size of businesses later became one of the convenient scapegoats for the Great Depression.

However, the rise of large manufacturing firms in the interwar period is not so easily interpreted as an attempt to monopolize their industries. Some of the growth came about through vertical integration by the more successful manufacturing firms. Backward integration was generally an attempt to ensure a smooth supply of raw materials where that supply was not plentiful and was dispersed and firms “feared that raw materials might become controlled by competitors or independent suppliers.” (Livesay and Porter, 1969) Forward integration was an offensive tactic employed when manufacturers found that the existing distribution network proved inadequate. Livesay and Porter suggested a number of reasons why firms chose to integrate forward. In some cases they had to provide the mass distribution facilities to handle their much larger outputs, especially when the product was a new one. The complexity of some new products required technical expertise that the existing distribution system could not provide. In other cases “the high unit costs of products required consumer credit which exceeded financial capabilities of independent distributors.” Forward integration into wholesaling was more common than forward integration into retailing. The producers of automobiles, petroleum, typewriters, sewing machines, and harvesters were typical of those manufacturers that integrated all the way into retailing.

In some cases, increases in industry concentration arose as a natural process of industrial maturation. In the automobile industry, Henry Ford’s invention in 1913 of the moving assembly line—a technological innovation that changed most manufacturing—lent itself to larger factories and firms. Of the several thousand companies that had produced cars prior to 1920, 120 were still doing so then, but Ford and General Motors were the clear leaders, together producing nearly 70 percent of the cars. During the twenties, several other companies, such as Durant, Willys, and Studebaker, missed their opportunity to become more important producers, and Chrysler, formed in early 1925, became the third most important producer by 1930. Many went out of business and by 1929 only 44 companies were still producing cars. The Great Depression decimated the industry. Dozens of minor firms went out of business. Ford struggled through by relying on its huge stockpile of cash accumulated prior to the mid-1920s, while Chrysler actually grew. By 1940, only eight companies still produced cars—GM, Ford, and Chrysler had about 85 percent of the market, while Willys, Studebaker, Nash, Hudson, and Packard shared the remainder. The rising concentration in this industry was not due to attempts to monopolize. As the industry matured, growing economies of scale in factory production and vertical integration, as well as the advantages of a widespread dealer network, led to a dramatic decrease in the number of viable firms. (Chandler, 1962 and 1964; Rae, 1984; Bernstein, 1987)

It was a similar story in the tire industry. The increasing concentration and growth of firms was driven by scale economies in production and retailing and by the devastating effects of the depression in the thirties. Although there were 190 firms in 1919, 5 firms dominated the industry—Goodyear, B. F. Goodrich, Firestone, U.S. Rubber, and Fisk, followed by Miller Rubber, General Tire and Rubber, and Kelly-Springfield. During the twenties, 166 firms left the industry while 66 entered. The share of the 5 largest firms rose from 50 percent in 1921 to 75 percent in 1937. During the depressed thirties, there was fierce price competition, and many firms exited the industry. By 1937 there were 30 firms, but the average employment per factory was 4.41 times as large as in 1921, and the average factory produced 6.87 times as many tires as in 1921. (French, 1986 and 1991; Nelson, 1987; Fricke, 1982)
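The per-factory figures above imply a substantial rise in output per worker, which can be checked with simple arithmetic. A minimal sketch (the multiples come from the text; the per-worker ratio is computed here):

```python
# Back-of-the-envelope check on the tire-industry figures: in 1937 the
# average factory employed 4.41 times as many workers and produced 6.87
# times as many tires as in 1921, so output per worker rose by the ratio.

employment_multiple = 4.41   # average employment per factory, 1937 vs. 1921
output_multiple = 6.87       # average tires per factory, 1937 vs. 1921

productivity_multiple = output_multiple / employment_multiple
print(f"Tires per worker in 1937 were about {productivity_multiple:.2f}x the 1921 level")
```

That is, output per worker in the average surviving factory was roughly 1.56 times its 1921 level, consistent with the scale economies the text describes.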

The steel industry was already highly concentrated by 1920 as U.S. Steel had around 50 percent of the market. But U.S. Steel’s market share declined through the twenties and thirties as several smaller firms competed and grew to become known as Little Steel, the next six largest integrated producers after U.S. Steel. Jonathan Baker (1989) has argued that the evidence is consistent with “the assumption that competition was a dominant strategy for steel manufacturers” until the depression. However, the initiation of the National Recovery Administration (NRA) codes in 1933 required the firms to cooperate rather than compete, and Baker argues that this constituted a training period leading firms to cooperate in price and output policies after 1935. (McCraw and Reinhardt, 1989; Weiss, 1980; Adams, 1977)

Mergers

A number of the larger firms grew by merger during this period, and the second great merger wave in American industry occurred during the last half of the 1920s. Figure 10 shows two series on mergers during the interwar period. The FTC series included many of the smaller mergers. The series constructed by Carl Eis (1969) only includes the larger mergers and ends in 1930.

This second great merger wave coincided with the stock market boom of the twenties and has been called “merger for oligopoly” rather than merger for monopoly. (Stigler, 1950) This merger wave created many larger firms that ranked below the industry leaders. Much of the merger activity occurred in the banking and public utilities industries. (Markham, 1955) In manufacturing and mining, the effects on industrial structure were less striking. Eis (1969) found that while mergers took place in almost all industries, they were concentrated in a smaller number of them, particularly petroleum, primary metals, and food products.

The federal government’s antitrust policies toward business varied sharply during the interwar period. In the 1920s there was relatively little activity by the Justice Department, but after the onset of the Great Depression the New Dealers moved to exempt much of business from the antitrust laws and to cartelize industries under government supervision.

With the passage of the FTC and Clayton Acts in 1914 to supplement the 1890 Sherman Act, the cornerstones of American antitrust law were complete. Though minor amendments were later enacted, the primary changes after that came in the enforcement of the laws and in swings in judicial decisions. The two primary areas of application were overt behavior, such as horizontal and vertical price-fixing, and market structure, such as mergers and dominant firms. Horizontal price-fixing involves firms that would normally be competitors agreeing on stable, higher prices for their products. As long as most of the important competitors agree on the new, higher prices, substitution between products is eliminated and the demand becomes much less elastic. Thus, increasing the price increases the revenues and the profits of the firms who are fixing prices. Vertical price-fixing involves firms setting the prices of intermediate products purchased at different stages of production. It also tends to eliminate substitutes and makes the demand less elastic.
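The elasticity logic above can be illustrated with a small sketch. This is a hypothetical numerical example, not data from the text: under a constant-elasticity demand curve Q = A · P^(−e), revenue is R = P · Q = A · P^(1−e), so revenue rises with price exactly when demand is inelastic (e < 1).

```python
# Illustrative only: revenue under a constant-elasticity demand curve.
# A successful price-fixing agreement removes substitutes, lowering the
# elasticity e, so that a price increase raises total revenue.

def revenue(price, elasticity, scale=100.0):
    """Revenue when quantity demanded is Q = scale * price**(-elasticity)."""
    quantity = scale * price ** (-elasticity)
    return price * quantity

# Elastic demand (e = 2): doubling the price lowers revenue.
assert revenue(2.0, 2.0) < revenue(1.0, 2.0)

# Inelastic demand (e = 0.5): doubling the price raises revenue.
assert revenue(2.0, 0.5) > revenue(1.0, 0.5)
```

The scale parameter and the elasticity values are arbitrary; only the direction of the comparison matters for the argument in the text.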

Price-fixing continued to be considered illegal throughout the period, but there was no major judicial activity regarding it in the 1920s other than the Trenton Potteries decision in 1927. In that decision 20 individuals and 23 corporations were found guilty of conspiring to fix the prices of bathroom bowls. The evidence in the case suggested that the firms were not very successful at doing so, but the court found that they were guilty nevertheless; their success, or lack thereof, was not held to be a factor in the decision. (Scherer and Ross, 1990) Though criticized by some, the decision was precedent setting in that it prohibited explicit pricing conspiracies per se.

The Justice Department had achieved success in dismantling Standard Oil and American Tobacco in 1911 through decisions that the firms had unreasonably restrained trade. These were essentially the same points used in court decisions against the Powder Trust in 1911, the thread trust in 1913, Eastman Kodak in 1915, the glucose and cornstarch trust in 1916, and the anthracite railroads in 1920. The criterion of an unreasonable restraint of trade was used in the 1916 and 1918 decisions that found the American Can Company and the United Shoe Machinery Company innocent of violating the Sherman Act; it was also clearly enunciated in the 1920 U. S. Steel decision. This became known as the rule of reason standard in antitrust policy.

Merger policy had been defined in the 1914 Clayton Act to prohibit only the acquisition of one corporation’s stock by another corporation. Firms then shifted to the outright purchase of a competitor’s assets. A series of court decisions in the twenties and thirties further reduced the possibilities of Justice Department actions against mergers. “Only fifteen mergers were ordered dissolved through antitrust actions between 1914 and 1950, and ten of the orders were accomplished under the Sherman Act rather than Clayton Act proceedings.”

Energy

The search for energy and new ways to translate it into heat, light, and motion has been one of the unending themes in history. From whale oil to coal oil to kerosene to electricity, the search for better and less costly ways to light our lives, heat our homes, and move our machines has consumed much time and effort. The energy industries responded to those demands and the consumption of energy materials (coal, oil, gas, and fuel wood) as a percent of GNP rose from about 2 percent in the latter part of the nineteenth century to about 3 percent in the twentieth.

Changes in the energy markets that had begun in the nineteenth century continued. Processed energy in the forms of petroleum derivatives and electricity continued to become more important than “raw” energy, such as that available from coal and water. The evolution of energy sources for lighting continued; at the end of the nineteenth century, natural gas and electricity, rather than liquid fuels, began to provide more lighting for streets, businesses, and homes.

In the twentieth century the continuing shift to electricity and internal combustion fuels increased the efficiency with which the American economy used energy. These processed forms of energy resulted in a more rapid increase in the productivity of labor and capital in American manufacturing. From 1899 to 1919, output per labor-hour increased at an average annual rate of 1.2 percent, whereas from 1919 to 1937 the increase was 3.5 percent per year. The productivity of capital had fallen at an average annual rate of 1.8 percent per year in the 20 years prior to 1919, but it rose 3.1 percent a year in the 18 years after 1919. As discussed above, the adoption of electricity in American manufacturing initiated a rapid evolution in the organization of plants and rapid increases in productivity in all types of manufacturing.
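The growth rates quoted above compound into large cumulative differences. A short sketch (the annual rates and periods come from the text; the cumulative multiples are computed here):

```python
# Compounding the productivity growth rates quoted in the text.

def cumulative_growth(annual_rate, years):
    """Total growth factor from compounding an annual rate over `years` years."""
    return (1 + annual_rate) ** years

# Output per labor-hour: 1.2% per year over 1899-1919 (20 years).
early = cumulative_growth(0.012, 20)   # roughly 1.27x
# Output per labor-hour: 3.5% per year over 1919-1937 (18 years).
later = cumulative_growth(0.035, 18)   # roughly 1.86x

print(f"1899-1919: {early:.2f}x; 1919-1937: {later:.2f}x")
```

In other words, labor productivity rose by about a quarter over the two decades before 1919 but by well over four-fifths in the slightly shorter period after it.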

The change in transportation was even more remarkable. Internal combustion engines running on gasoline or diesel fuel revolutionized transportation. Cars quickly grabbed the lion’s share of local and regional travel and began to eat into long distance passenger travel, just as the railroads had done to passenger traffic by water in the 1830s. Even before the First World War cities had begun passing laws to regulate and limit “jitney” services and to protect the investments in urban rail mass transit. Trucking began eating into the freight carried by the railroads.

These developments brought about changes in the energy industries. Coal mining became a declining industry. As Figure 11 shows, in 1925 the share of petroleum in the value of coal, gas, and petroleum output exceeded that of bituminous coal, and it continued to rise. Anthracite coal’s share was much smaller and it declined while natural gas and LP (or liquefied petroleum) gas were relatively unimportant. These changes, especially the declining coal industry, were the source of considerable worry in the twenties.

Coal

One of the industries considered to be “sick” in the twenties was coal, particularly bituminous, or soft, coal. Income in the industry declined, and bankruptcies were frequent. Strikes frequently interrupted production. The majority of the miners “lived in squalid and unsanitary houses, and the incidence of accidents and diseases was high.” (Soule, 1947) The number of operating bituminous coal mines declined sharply from 1923 through 1932. Anthracite (or hard) coal output was much smaller during the twenties. Real coal prices rose from 1919 to 1922, and bituminous coal prices fell sharply from then to 1925. (Figure 12) Coal mining employment plummeted during the twenties. Annual earnings, especially in bituminous coal mining, also fell because of dwindling hourly earnings and, from 1929 on, a shrinking workweek. (Figure 13)

The sources of these changes are to be found in the increasing supply due to productivity advances in coal production and in the decreasing demand for coal. The demand fell as industries began turning from coal to electricity and because of productivity advances in the use of coal to create energy in steel, railroads, and electric utilities. (Keller, 1973) In the generation of electricity, larger steam plants employing higher temperatures and steam pressures continued to reduce coal consumption per kilowatt hour. Similar reductions were found in the production of coke from coal for iron and steel production and in the use of coal by the steam railroad engines. (Rezneck, 1951) All of these factors reduced the demand for coal.

Productivity advances in coal mining tended to be labor saving. Mechanical cutting accounted for 60.7 percent of the coal mined in 1920 and 78.4 percent in 1929. By the middle of the twenties, the mechanical loading of coal began to be introduced. Between 1929 and 1939, output per labor-hour rose nearly one third in bituminous coal mining and nearly four fifths in anthracite as more mines adopted machine mining and mechanical loading and strip mining expanded.

The increasing supply and falling demand for coal led to the closure of mines that were too costly to operate. A mine could simply cease operations, let the equipment stand idle, and lay off employees. When bankruptcies occurred, the mines generally reopened under new ownership with lower capital charges. When demand increased or strikes reduced the supply of coal, idle mines simply resumed production. As a result, the easily expanded supply largely eliminated economic profits.

The average daily employment in coal mining dropped by over 208,000 from its peak in 1923, but the sharply falling real wages suggest that the supply of labor did not fall as rapidly as the demand for labor. Soule (1947) notes that when employment fell in coal mining, it meant fewer days of work for the same number of men. Social and cultural characteristics tended to tie many to their home region. The local alternatives were few, and ignorance of alternatives outside the Appalachian rural areas, where most bituminous coal was mined, made it very costly to transfer out.

Petroleum

In contrast to the coal industry, the petroleum industry was growing throughout the interwar period. By the thirties, crude petroleum dominated the real value of the production of energy materials. As Figure 14 shows, the production of crude petroleum increased sharply between 1920 and 1930, while real petroleum prices, though highly variable, tended to decline.

The growing demand for petroleum was driven by the growth in demand for gasoline as America became a motorized society. The production of gasoline surpassed kerosene production in 1915. Kerosene’s market continued to contract as electric lighting replaced kerosene lighting. The development of oil burners in the twenties began a switch from coal toward fuel oil for home heating, and this further increased the growing demand for petroleum. The growth in the demand for fuel oil and diesel fuel for ship engines also increased petroleum demand. But it was the growth in the demand for gasoline that drove the petroleum market.

The decline in real prices in the latter part of the twenties shows that supply was growing even faster than demand. The discovery of new fields in the early twenties increased the supply of petroleum and led to falling prices as production capacity grew. The Santa Fe Springs, California strike in 1919 initiated a supply shock, as did the discovery of the Long Beach, California field in 1921. New discoveries in Powell, Texas and Smackover, Arkansas further increased the supply of petroleum in 1921. New supply increases occurred in 1926 to 1928 with petroleum strikes in Seminole, Oklahoma and Hendricks, Texas. The supply of oil increased sharply in 1930 to 1931 with new discoveries in Oklahoma City and East Texas. Each new discovery pushed down real oil prices and the prices of petroleum derivatives, and the growing production capacity led to a general declining trend in petroleum prices. McMillin and Parker (1994) argue that supply shocks generated by these new discoveries were a factor in the business cycles during the 1920s.

The supply of gasoline increased more than the supply of crude petroleum. In 1913 a chemist at Standard Oil of Indiana introduced the cracking process to refine crude petroleum; until that time it had been refined by distillation or unpressurized heating. In the heating process, various refined products such as kerosene, gasoline, naphtha, and lubricating oils were produced at different temperatures. It was difficult to vary the amount of the different refined products produced from a barrel of crude. The cracking process used pressurized heating to break heavier components down into lighter crude derivatives; with cracking, it was possible to increase the amount of gasoline obtained from a barrel of crude from 15 to 45 percent. In the early twenties, chemists at Standard Oil of New Jersey improved the cracking process, and by 1927 it was possible to obtain twice as much gasoline from a barrel of crude petroleum as in 1917.
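The yield improvement from cracking can be made concrete with simple arithmetic. A sketch assuming the standard 42-gallon barrel (the barrel size is an assumption; the text gives only the percentages):

```python
# Gasoline obtained per barrel of crude at the yields quoted in the text,
# assuming a standard 42-gallon barrel.

BARREL_GALLONS = 42

def gasoline_per_barrel(yield_fraction):
    """Gallons of gasoline refined from one barrel at a given yield."""
    return BARREL_GALLONS * yield_fraction

before = gasoline_per_barrel(0.15)   # distillation: about 6.3 gallons
after = gasoline_per_barrel(0.45)    # cracking: about 18.9 gallons
print(f"{before:.1f} -> {after:.1f} gallons per barrel, a {after / before:.0f}x increase")
```

Tripling the gasoline yield per barrel helps explain how the supply of gasoline could grow faster than the supply of crude itself.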

The petroleum companies also developed new ways to distribute gasoline to motorists that made it more convenient to purchase gasoline. Prior to the First World War, gasoline was commonly purchased in one- or five-gallon cans and the purchaser used a funnel to pour the gasoline from the can into the car. Then “filling stations” appeared, which specialized in filling cars’ tanks with gasoline. These spread rapidly, and by 1919 gasoline companies were beginning to introduce their own filling stations or contract with independent stations to exclusively distribute their gasoline. Increasing competition and falling profits led filling station operators to expand into other activities such as oil changes and other mechanical repairs. The general name attached to such stations gradually changed to “service stations” to reflect these new functions.

Though the petroleum firms tended to be large, they were highly competitive, trying to pump as much petroleum as possible to increase their share of the fields. This, combined with the development of new fields, led to an industry with highly volatile prices and output. Firms desperately wanted to stabilize and reduce the production of crude petroleum so as to stabilize and raise the prices of crude petroleum and refined products. Unable to obtain voluntary agreement on output limitations by the firms and producers, governments began stepping in. Led by Texas, which created the Texas Railroad Commission in 1891, oil-producing states began to intervene to regulate production. Such laws were usually termed prorationing laws and were quotas designed to limit each well’s output to some fraction of its potential. The laws were generally passed under the guise of conservation, but their purpose was as much to stabilize and reduce production and raise prices as anything else. The federal government supported such attempts, but not until the New Deal were federal laws passed to assist these efforts.

Electricity

By the mid-1890s the debate over the method by which electricity was to be transmitted had been won by those who advocated alternating current. The reduced power losses and greater distance over which electricity could be transmitted more than offset the necessity for transforming the current back to direct current for general use. Widespread adoption of machines and appliances by industry and consumers then rested on an increase in the array of products using electricity as the source of power, heat, or light and the development of an efficient, lower cost method of generating electricity.

General Electric, Westinghouse, and other firms began producing the electrical appliances for homes and an increasing number of machines based on electricity began to appear in industry. The problem of lower cost production was solved by the introduction of centralized generating facilities that distributed the electric power through lines to many consumers and business firms.

Though initially several firms competed in generating and selling electricity to consumers and firms in a city or area, by the First World War many states and communities were awarding exclusive franchises to one firm to generate and distribute electricity to the customers in the franchise area. (Bright, 1947; Passer, 1953) The electric utility industry became an important growth industry and, as Figure 15 shows, electricity production and use grew rapidly.

The electric utilities increasingly were regulated by state commissions that were charged with setting rates so that the utilities could receive a “fair return” on their investments. Disagreements over what constituted a “fair return” and over the calculation of the rate base (original cost versus reproduction cost) led to a steady stream of cases before the commissions and a continuing series of court appeals. Generally these court decisions favored the reproduction-cost basis. Because of the difficulty and cost of making these calculations, rate setting tended to remain in the hands of the electric utilities, which, it has been suggested, did not lower rates enough to reflect their rising productivity and falling costs of production. The utilities argued that a more rapid lowering of rates would have jeopardized their profits. Whether or not this increased their monopoly power is still an open question, but it should be noted that electric utilities were hardly price-taking industries prior to regulation. (Mercer, 1973) In fact, as Figure 16 shows, the electric utilities began to systematically practice market segmentation, charging users with less elastic demands higher prices per kilowatt-hour.

Energy in the American Economy of the 1920s

The changes in the energy industries had far-reaching consequences. The coal industry faced a continuing decline in demand. Even in the growing petroleum industry, the periodic surges in the supply of petroleum caused great instability. In manufacturing, as described above, electrification contributed to a remarkable rise in productivity. The transportation revolution brought about by the rise of gasoline-powered trucks and cars changed the way businesses received their supplies and distributed their production as well as where they were located. The suburbanization of America and the beginnings of urban sprawl were largely brought about by the introduction of low-priced gasoline for cars.

Transportation

The American economy was forever altered by the dramatic changes in transportation after 1900. Following Henry Ford’s introduction of the moving assembly production line in 1914, automobile prices plummeted, and by the end of the 1920s about 60 percent of American families owned an automobile. The advent of low-cost personal transportation led to an accelerating movement of population out of the crowded cities to more spacious homes in the suburbs and the automobile set off a decline in intracity public passenger transportation that has yet to end. Massive road-building programs facilitated the intercity movement of people and goods. Trucks increasingly took over the movement of freight in competition with the railroads. New industries, such as gasoline service stations, motor hotels, and the rubber tire industry, arose to service the automobile and truck traffic. These developments were complicated by the turmoil caused by changes in the federal government’s policies toward transportation in the United States.

With the end of the First World War, a debate began as to whether the railroads, which had been taken over by the government, should be returned to private ownership or nationalized. The voices calling for a return to private ownership were much stronger, but doing so fomented great controversy. Many in Congress believed that careful planning and consolidation could restore the railroads and make them more efficient. There was continued concern about the near monopoly that the railroads had on the nation’s intercity freight and passenger transportation. The result of these deliberations was the Transportation Act of 1920, which was premised on the continued domination of the nation’s transportation by the railroads—an erroneous presumption.

The Transportation Act of 1920 represented a marked change in the Interstate Commerce Commission’s ability to control railroads. The ICC was empowered to prescribe exact rates, which were to be set so as to allow the railroads to earn a fair return, defined as 5.5 percent, on the fair value of their property. The ICC was authorized to make an accounting of the fair value of each regulated railroad’s property; however, this was not completed until well into the 1930s, by which time the accounting and rate rules were out of date. To maintain fair competition between railroads in a region, all roads were to charge the same rates for the same goods over the same distance. With the same rates, low-cost roads should have been able to earn higher rates of return than high-cost roads. To handle this, a recapture clause was inserted: any railroad earning a return of more than 6 percent on the fair value of its property was to turn the excess over to the ICC, which would place half of the money in a contingency fund for that railroad for when it encountered financial problems and the other half in a contingency fund to provide loans to other railroads in need of assistance.

In order to address the problem of weak and strong railroads and to bring better coordination to the movement of rail traffic in the United States, the act directed the ICC to encourage railroad consolidation, but little came of this in the 1920s. In order to facilitate its control of the railroads, the ICC was given two additional powers. The first was control over the issuance or purchase of securities by railroads, and the second was the power to control changes in railroad service through the control of car supply and the extension and abandonment of track. The control of the supply of rail cars was turned over to the Association of American Railroads. Few extensions of track were proposed, but as time passed, abandonment requests grew. The ICC, however, trying to mediate between the conflicting demands of shippers, communities and railroads, generally refused to grant abandonments, and this became an extremely sensitive issue in the 1930s.

As indicated above, the premises of the Transportation Act of 1920 were wrong. Railroads experienced increasing competition during the 1920s, and both freight and passenger traffic were drawn off to competing transport forms. Passenger traffic left the railroads much more quickly. As the network of all-weather surfaced roads increased, people quickly turned from the train to the car. Harmed even more by the move to automobile traffic were the electric interurban railways that had grown rapidly just prior to the First World War. (Hilton-Due, 1960) Not surprisingly, during the 1920s few railroads earned profits in excess of the fair rate of return.

The use of trucks to deliver freight began shortly after the turn of the century. Before the outbreak of war in Europe, White and Mack were producing trucks with as much as 7.5 tons of carrying capacity. Most of the truck freight was carried on a local basis, and it largely supplemented the longer distance freight transportation provided by the railroads. However, truck size was growing. In 1915 Trailmobile introduced the first four-wheel trailer designed to be pulled by a truck tractor unit. During the First World War, thousands of trucks were constructed for military purposes, and truck convoys showed that long distance truck travel was feasible and economical. The use of trucks to haul freight had been growing by over 18 percent per year since 1925, so that by 1929 intercity trucking accounted for more than one percent of the ton-miles of freight hauled.

The railroads argued that the trucks and buses provided “unfair” competition and believed that if they were also regulated, then the regulation could equalize the conditions under which they competed. As early as 1925, the National Association of Railroad and Utilities Commissioners issued a call for the regulation of motor carriers in general. In 1928 the ICC called for federal regulation of buses and in 1932 extended this call to federal regulation of trucks.

Most states had begun regulating buses at the beginning of the 1920s in an attempt to reduce the diversion of urban passenger traffic from the electric trolley and railway systems. However, most of the regulation did not aim to control intercity passenger traffic by buses. As the network of surfaced roads expanded during the twenties, so did the routes of the intercity buses. In 1929 a number of smaller bus companies were incorporated into the Greyhound Buslines, the carrier that has since dominated intercity bus transportation. (Walsh, 2000)

A complaint of the railroads was that interstate trucking competition was unfair because it was subsidized while railroads were not. All railroad property was privately owned and subject to property taxes, whereas truckers used the existing road system and therefore neither had to bear the costs of creating the road system nor pay taxes upon it. Beginning with the Federal Road-Aid Act of 1916, small amounts of money were provided as an incentive for states to construct rural post roads. (Dearing-Owen, 1949) However, through the First World War most of the funds for highway construction came from a combination of levies on the adjacent property owners and county and state taxes. The monies raised by the counties were commonly 60 percent of the total funds allocated, and these primarily came from property taxes. In 1919 Oregon pioneered the state gasoline tax, which then began to be adopted by more and more states. A highway system financed by property taxes and other levies can be construed as a subsidization of motor vehicles, and one study for the period up to 1920 found evidence of substantial subsidization of trucking. (Herbst-Wu, 1973) However, the use of gasoline taxes moved closer to the goal of users paying the costs of the highways. Nor should trucks have had to pay for all of the highway construction, since automobiles jointly used the highways. Highways had to be constructed in more costly ways in order to accommodate the larger and heavier trucks. Ideally the gasoline taxes collected from trucks should have covered the extra (or marginal) costs of highway construction incurred because of the truck traffic. Gasoline taxes tended to do this.

The American economy occupies a vast geographic region. Because economic activity occurs over most of the country, falling transportation costs have been crucial to knitting American firms and consumers into a unified market. Throughout the nineteenth century the railroads played this crucial role. Because of the size of the railroad companies and their importance in the economic life of Americans, the federal government began to regulate them. But, by 1917 it appeared that the railroad system had achieved some stability, and it was generally assumed that the post-First World War era would be an extension of the era from 1900 to 1917. Nothing could have been further from the truth. Spurred by public investments in highways, cars and trucks voraciously ate into the railroads’ market, and, though the regulators failed to understand this at the time, the railroads’ monopoly on transportation quickly disappeared.

Communications

Communications had joined with transportation developments in the nineteenth century to tie the American economy together more completely. The telegraph had benefited by using the railroads’ right-of-ways, and the railroads used the telegraph to coordinate and organize their far-flung activities. As the cost of communications fell and information transfers sped, the development of firms with multiple plants at distant locations was facilitated. The interwar era saw a continuation of these developments as the telephone continued to supplant the telegraph and the new medium of radio arose to transmit news and provide a new entertainment source.

Telegraph domination of business and personal communications gave way to the telephone after new electronic amplifiers made long distance telephone calls between the east and west coasts possible in 1915. The number of telegraph messages handled grew 60.4 percent in the twenties. The number of local telephone conversations grew 46.8 percent between 1920 and 1930, while the number of long distance conversations grew 71.8 percent over the same period. There were 5 times as many long distance telephone calls as telegraph messages handled in 1920, and 5.7 times as many in 1930.

The twenties were a prosperous period for AT&T and its 18 major operating companies. (Brooks, 1975; Temin, 1987; Garnet, 1985; Lipartito, 1989) Telephone usage rose and, as Figure 19 shows, the share of all households with a telephone rose from 35 percent to nearly 42 percent. In cities across the nation, AT&T consolidated its system, gained control of many operating companies, and virtually eliminated its competitors. It was able to do this because in 1921 Congress passed the Graham Act exempting AT&T from the Sherman Act in consolidating competing telephone companies. By 1940, the non-Bell operating companies were all small relative to the Bell operating companies.

Surprisingly there was a decline in telephone use on the farms during the twenties. (Hadwiger-Cochran, 1984; Fischer, 1987) Rising telephone rates explain part of the decline in rural use. The imposition of connection fees during the First World War made it more costly for new farmers to hook up. As AT&T gained control of more and more operating systems, telephone rates were increased. AT&T also began requiring, as a condition of interconnection, that independent companies upgrade their systems to meet AT&T standards. Most of the small mutual companies that had provided service to farmers had operated on a shoestring—wires were often strung along fenceposts, and phones were inexpensive “whoop and holler” magneto units. Upgrading to AT&T’s standards raised costs, forcing these companies to raise rates.

However, it also seems likely that during the 1920s there was a general decline in the rural demand for telephone services. One important factor in this was the dramatic decline in farm incomes in the early twenties. A second was a change in the farmers’ environment. Prior to the First World War, the telephone eased farm isolation and provided news and weather information that was otherwise hard to obtain. After 1920 automobiles, surfaced roads, movies, and the radio loosened the isolation, and the telephone was no longer as crucial.

Ottmar Mergenthaler’s development of the linotype machine in the late nineteenth century had irrevocably altered printing and publishing. This machine, which quickly created a line of soft, lead-based metal type that could be printed, melted down and then recast as a new line of type, dramatically lowered the costs of printing. Previously, all type had to be painstakingly set by hand, with individual cast letter matrices picked out from compartments in drawers to construct words, lines, and paragraphs. After printing, each line of type on the page had to be broken down and each individual letter matrix placed back into its compartment in its drawer for use in the next printing job. Because the process was so laborious, newspapers often were not published every day and did not contain many pages, and most cities supported many competing newspapers.

In contrast to this laborious process, the linotype used a keyboard upon which the operator typed the words in one of the lines in a news column. Matrices for each letter dropped down from a magazine of matrices as the operator typed each letter and were assembled into a line of type with automatic spacers to justify the line (fill out the column width). When the line was completed the machine mechanically cast the line of matrices into a line of lead type. The line of lead type was ejected into a tray and the letter matrices mechanically returned to the magazine while the operator continued typing the next line in the news story. The first Mergenthaler linotype machine was installed in the New York Tribune in 1886. The linotype dramatically lowered the costs of printing books and magazines as well as newspapers. Prior to the linotype a typical newspaper averaged no more than 11 pages and many were published only a few times a week. The linotype machine allowed newspapers to grow in size and to be published more regularly, and a process of consolidation of daily and Sunday newspapers began that continues to this day. Many have termed the Mergenthaler linotype machine the most significant printing invention since the introduction of movable type 400 years earlier.

For city families as well as farm families, radio became the new source of news and entertainment. (Barnouw, 1966; Rosen, 1980 and 1987; Chester-Garrison, 1950) It soon took over as the prime advertising medium and in the process revolutionized advertising. By 1930 more homes had radio sets than had telephones. The radio networks sent news and entertainment broadcasts all over the country. The isolation of rural life, particularly in many areas of the plains, was forever broken by the intrusion of the “black box,” as radio receivers were often called. The radio began a process of breaking down regionalism and creating a common culture in the United States.

The potential demand for radio became clear with the first regular broadcast of Westinghouse’s KDKA in Pittsburgh in the fall of 1920. Because the Department of Commerce could not deny a license application, there was an explosion of stations all broadcasting at the same frequency, and signal jamming and interference became a serious problem. By 1923 the Department of Commerce had gained control of radio from the Post Office and the Navy and began to arbitrarily disperse stations on the radio dial and to deny licenses, creating the first market in commercial broadcast licenses. In 1926 a U.S. District Court decided that under the Radio Law of 1912 Herbert Hoover, the secretary of commerce, did not have this power. New stations appeared and the logjam and interference of signals worsened. A Radio Act was passed in January of 1927 creating the Federal Radio Commission (FRC) as a temporary licensing authority. Licenses were to be issued in the public interest, convenience, and necessity. A number of broadcasting licenses were revoked; stations were assigned frequencies, dial locations, and power levels. The FRC created 24 clear-channel stations with as much as 50,000 watts of broadcasting power, of which 21 ended up being affiliated with the new national radio networks. The Communications Act of 1934 essentially repeated the 1927 act, except that it created a permanent, seven-person Federal Communications Commission (FCC).

Local stations initially created and broadcast the radio programs. The expenses were modest, and stores and companies operating radio stations wrote this off as indirect, goodwill advertising. Several forces changed all this. In 1922, AT&T opened up a radio station in New York City, WEAF (later to become WNBC). AT&T envisioned this station as the center of a radio toll system in which individuals could purchase time to broadcast a message transmitted to other stations in the toll network over AT&T’s long distance lines. An August 1922 broadcast by a Long Island realty company became the first conscious use of direct advertising.

Though advertising continued to be condemned, the fiscal pressures on radio stations to accept advertising began rising. In 1923 the American Society of Composers and Publishers (ASCAP) began demanding a performance fee anytime ASCAP-copyrighted music was performed on the radio, either live or on record. By 1924 the issue was settled, and most stations began paying performance fees to ASCAP. AT&T decided that all stations broadcasting with non-AT&T transmitters were violating its patent rights and began asking for annual fees from such stations based on the station’s power. By the end of 1924, most stations were paying the fees. All of this drained the coffers of the radio stations, and more and more of them began discreetly accepting advertising.

RCA became upset at AT&T’s creation of a chain of radio stations and set up its own toll network using the inferior lines of Western Union and Postal Telegraph, because AT&T, not surprisingly, did not allow any toll (or network) broadcasting on its lines except by its own stations. AT&T began to worry that its actions might threaten its federal monopoly in long distance telephone communications. In 1926 a new firm was created, the National Broadcasting Company (NBC), which took over all broadcasting activities from AT&T and RCA as AT&T left broadcasting. When NBC debuted in November of 1926, it had two networks: the Red, which was the old AT&T network, and the Blue, which was the old RCA network. Radio networks allowed advertisers to direct advertising at a national audience at a lower cost. Network programs allowed local stations to broadcast superior programs that captured a larger listening audience and in return received a share of the fees the national advertiser paid to the network. In 1927 a new network, the Columbia Broadcasting System (CBS), financed by the Paley family, began operation, and other new networks entered or tried to enter the industry in the 1930s.

Communications developments in the interwar era present something of a mixed picture. By 1920 long distance telephone service was in place, but rising rates slowed the rate of adoption in the period, and telephone use in rural areas declined sharply. Though direct dialing was first tried in the twenties, its general implementation would not come until the postwar era, when other changes, such as microwave transmission of signals and touch-tone dialing, would also appear. Though the number of newspapers declined, newspaper circulation generally held up. The number of competing newspapers in larger cities began declining, a trend that also would accelerate in the postwar American economy.

Banking and Securities Markets

In the twenties commercial banks became “department stores of finance.” Banks opened up installment (or personal) loan departments, expanded their mortgage lending, opened up trust departments, undertook securities underwriting activities, and offered safe deposit boxes. These changes were a response to growing competition from other financial intermediaries. Businesses, stung by bankers’ control and reduced lending during the 1920-21 depression, began relying more on retained earnings and stock and bond issues to raise investment and, sometimes, working capital. This reduced loan demand. The thrift institutions also experienced good growth in the twenties as they helped fuel the housing construction boom of the decade. The securities markets boomed in the twenties only to see a dramatic crash of the stock market in late 1929.

There were two broad classes of commercial banks: those that were nationally chartered and those that were chartered by the states. Only the national banks were required to be members of the Federal Reserve System. (Figure 21) Most banks were unit banks because national regulators and most state regulators prohibited branching. However, in the twenties a few states began to permit limited branching; California even allowed statewide branching. The Federal Reserve member banks held the bulk of the assets of all commercial banks, even though most banks were not members. A high bank failure rate in the 1920s has usually been explained by “overbanking,” or too many banks located in an area, but H. Thomas Johnson (1973-74) makes a strong argument against this. (Figure 22) If there had been overbanking, each bank on average would have been underutilized, resulting in intense competition for deposits and, in turn, higher costs and lower earnings; free entry by any bank meeting the minimum requirements then in force was the reason commonly offered for such overbanking. However, the twenties saw changes that led to the demise of many smaller rural banks that would likely have been profitable had these changes not occurred. Improved transportation led to a movement of business activities, including banking, into the larger towns and cities. Rural banks that relied on loans to farmers suffered just as farmers did during the twenties, especially in the first half of the twenties. The number of bank suspensions and the suspension rate fell after 1926. The sharp rise in bank suspensions in 1930 occurred because of the first banking crisis of the Great Depression.

Prior to the twenties, the main assets of commercial banks were short-term business loans, made by creating a demand deposit or increasing an existing one for a borrowing firm. As business lending declined in the 1920s commercial banks vigorously moved into new types of financial activities. As banks purchased more securities for their earning asset portfolios and gained expertise in the securities markets, larger ones established investment departments and by the late twenties were an important force in the underwriting of new securities issued by nonfinancial corporations.

The securities market exhibited perhaps the most dramatic growth among the noncommercial-bank financial intermediaries during the twenties, but others also grew rapidly. (Figure 23) The assets of life insurance companies increased by 10 percent a year from 1921 to 1929; by the late twenties they were a very important source of funds for construction investment. Mutual savings banks and savings and loan associations (thrifts) operated in essentially the same types of markets. Mutual savings banks were concentrated in the northeastern United States. As incomes rose, personal savings increased, and housing construction expanded in the twenties, there was an increasing demand for the thrifts’ interest-earning time deposits and mortgage lending.

But the dramatic expansion in the financial sector came in new corporate securities issues in the twenties—especially common and preferred stock—and in the trading of existing shares of those securities. (Figure 24) The late twenties boom in the American economy was rapid, highly visible, and dramatic. Skyscrapers were being erected in most major cities; the automobile manufacturers produced over four and a half million new cars in 1929; and the stock market, like a barometer of this prosperity, was on a dizzying ride to higher and higher prices. “Playing the market” seemed to become a national pastime.

The Dow-Jones index hit its peak of 381 on September 3 and then slid to 320 on October 21. In the following week the stock market “crashed,” with a record number of shares being traded on several days. At the end of Tuesday, October 29, the index stood at 230, 96 points less than one week before. On November 13, 1929, the Dow-Jones index reached its lowest point for the year at 198, 183 points less than the September 3 peak.

The path of the stock market boom of the twenties can be seen in Figure 25. Sharp price breaks occurred several times during the boom, and each of these gave rise to dark predictions of the end of the bull market and speculation. Until late October of 1929, these predictions turned out to be wrong. Between those price breaks and prior to the October crash, stock prices continued to surge upward. In March of 1928, 3,875,910 shares were traded in one day, establishing a record. By late 1928, five million shares being traded in a day was a common occurrence.

New securities, from rising merger activity and the formation of holding companies, were issued to take advantage of the rising stock prices. Stock pools, which were not made illegal until the Securities Exchange Act of 1934, took advantage of the boom to temporarily drive up the price of selected stocks and reap large gains for the members of the pool. In a stock pool a group of speculators would pool large amounts of their funds and then begin purchasing large amounts of shares of a stock. This increased demand led to rising prices for that stock. Frequently pool insiders would “churn” the stock by repeatedly buying and selling the same shares among themselves, but at rising prices. Outsiders, seeing the price rising, would be drawn into purchasing the stock. At a predetermined higher price the pool members would, within a short period, sell their shares and pull out of the market for that stock. Without the additional demand from the pool, the stock’s price usually fell quickly, bringing large losses for the unsuspecting outside investors while reaping large gains for the pool insiders.
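The mechanics of a pool can be sketched with a toy model. The linear price-impact rule (the price moves $0.01 per share of net buying), the trade sizes, and every price below are invented for illustration; nothing here reflects actual 1920s data.

```python
# Toy stock-pool model: insiders buy in blocks (raising the price),
# outsiders buy near the top, the pool dumps, and the price collapses.

IMPACT = 0.01          # assumed price move per share of net buying

price = 50.0
pool_shares = 0
pool_cost = 0.0

# Phase 1: the pool accumulates shares in blocks, pushing the price up;
# it pays the post-impact price for each block (a conservative choice).
for _ in range(5):
    price += IMPACT * 1000
    pool_shares += 1000
    pool_cost += 1000 * price

# Phase 2: outsiders, seeing the rise, buy a block near the top,
# pushing the price up once more.
price += IMPACT * 1000
outsider_buy_price = price

# Phase 3: the pool dumps everything at the top (simplification: it sells
# before its own selling pressure hits), and the price collapses.
pool_revenue = pool_shares * price
price -= IMPACT * pool_shares

pool_profit = pool_revenue - pool_cost
outsider_loss_per_share = outsider_buy_price - price
print(f"pool profit: {pool_profit:.0f}, outsider loss per share: {outsider_loss_per_share:.0f}")
```

Even in this crude setup the asymmetry is clear: the pool's profit is exactly the outsiders' and late sellers' loss, transferred through the temporary price distortion.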

Another factor commonly used to explain both the speculative boom and the October crash was the purchase of stocks on small margins. However, contrary to popular perception, margin requirements through most of the twenties were essentially the same as in previous decades: the usual requirement was 10 to 15 percent of the purchase price and, apparently, more often around 10 percent. Brokers, recognizing the problems with margin lending in the rapidly changing market, began raising margin requirements in late 1928, well before the crash and at the urging of a special New York Clearinghouse committee, and by the fall of 1929 margin requirements were the highest in the history of the New York Stock Exchange. One brokerage house required the following of its clients: securities with a selling price below $10 could be purchased only for cash; securities with a selling price of $10 to $20 required a 50 percent margin; securities of $20 to $30, a margin of 40 percent; and securities with a price above $30, a margin of 30 percent of the purchase price. In the first half of 1929 margin requirements on customers’ accounts averaged 40 percent, and some houses raised their margins to 50 percent a few months before the crash. These were, historically, very high margin requirements. (Smiley and Keehn, 1988) Even so, during the crash, when additional margin calls were issued, investors who could not provide additional margin saw their brokers sell their stock at whatever the market price was at the time, and these forced sales helped drive prices even lower.
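That brokerage house's sliding schedule is simple enough to state as a function. The sketch below uses the percentages quoted above; the function name and the handling of the exact boundary prices ($10, $20, $30) are my assumptions.

```python
# Sliding margin schedule of the brokerage house described in the text:
# the cheaper the stock, the larger the share of the purchase price the
# customer had to put up in cash.

def required_margin(price, shares):
    """Minimum cash a customer had to put up to buy `shares` at `price`."""
    if price < 10:
        rate = 1.00   # below $10: cash only
    elif price < 20:
        rate = 0.50
    elif price < 30:
        rate = 0.40
    else:
        rate = 0.30
    return rate * price * shares

# Buying 100 shares of a $25 stock cost $2,500 but required only
# $1,000 down; the broker lent the remaining $1,500.
print(required_margin(25, 100))
```

The schedule leaned against speculation in cheap, volatile shares while still letting customers lever up on higher-priced stocks, which is why even these historically high requirements left room for forced selling in the crash.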

The crash began on Monday, October 21, as the index of stock prices fell 3 points on the third-largest volume in the history of the New York Stock Exchange. After a slight rally on Tuesday, prices began declining on Wednesday and fell 21 points by the end of the day, bringing on the third call for more margin in that week. On Black Thursday, October 24, prices initially fell sharply, but rallied somewhat in the afternoon so that the net loss was only 7 points; the volume of thirteen million shares, however, set a NYSE record. Friday brought a small gain that was wiped out on Saturday. On Monday, October 28, the Dow Jones index fell 38 points on a volume of nine million shares, three million of them in the final hour of trading. Black Tuesday, October 29, brought declines in virtually every stock price. Manufacturing firms, which had been lending large sums to brokers for margin loans, had been calling in these loans, and the calls accelerated on Monday and Tuesday. The big Wall Street banks increased their lending on call loans to offset some of this loss of loanable funds. The Dow Jones index fell 30 points on a record volume of nearly sixteen and a half million shares exchanged. Black Thursday and Black Tuesday wiped out entire fortunes.

Though the worst was over, prices continued to decline until November 13, 1929, as brokers cleaned up their accounts and sold off the stocks of clients who could not supply additional margin. After that, prices began to slowly rise, and by April of 1930 had increased 96 points from the low of November 13, “only” 87 points less than the peak of September 3, 1929. From that point, stock prices resumed their depressing decline until the low point was reached in the summer of 1932.

There is a long tradition that insists that the Great Bull Market of the late twenties was an orgy of speculation that bid the prices of stocks far above any sustainable or economically justifiable level, creating a bubble in the stock market. John Kenneth Galbraith (1954) observed, “The collapse in the stock market in the autumn of 1929 was implicit in the speculation that went before.” But not everyone has agreed with this.

In 1930 Irving Fisher argued that the stock prices of 1928 and 1929 were based on fundamental expectations that future corporate earnings would be high. More recently, Murray Rothbard (1963), Gerald Gunderson (1976), and Jude Wanniski (1978) have argued that stock prices were not too high prior to the crash. Gunderson suggested that prior to 1929, stock prices were where they should have been and that when corporate profits in the summer and fall of 1929 failed to meet expectations, stock prices were written down. Wanniski argued that political events brought on the crash: the market broke each time news arrived of advances in congressional consideration of the Smoot-Hawley tariff. However, the virtually perfect foresight that Wanniski’s explanation requires is unrealistic. Charles Kindleberger (1973) and Peter Temin (1976) examined common stock yields and price-earnings ratios and found that their relative constancy did not suggest that stock prices were bid up unrealistically high in the late twenties. Gary Santoni and Gerald Dwyer (1990) also failed to find evidence of a bubble in stock prices in 1928 and 1929. Gerald Sirkin (1975) found that the implied growth rates of dividends required to justify stock prices in 1928 and 1929 were quite conservative and lower than post-Second World War dividend growth rates.

However, examination of after-the-fact common stock yields and price-earnings ratios can do no more than provide some ex post justification for suggesting that there was not excessive speculation during the Great Bull Market. Each individual investor was motivated by his or her subjective expectations of each firm’s future earnings and dividends and of the future prices of its shares. Because of this subjectivity, we can never accurately know those values, nor can we know how they varied among individuals. The market price we observe is the end result of all the actions of the market participants, and it may differ from the price almost all of the participants expected.

In fact, there are some indications that expectations did differ in 1928 and 1929. Yields on common stocks were somewhat lower in 1928 and 1929. In October of 1928, brokers generally began raising margin requirements, and by the beginning of the fall of 1929, margin requirements were, on average, the highest in the history of the New York Stock Exchange. Though the discount and commercial paper rates had moved closely with the call and time rates on brokers’ loans through 1927, the rates on brokers’ loans increased much more sharply in 1928 and 1929. This pulled in funds from corporations, private investors, and foreign banks as New York City banks sharply reduced their lending. These facts suggest that brokers and New York City bankers may have come to believe that stock prices had been bid above a sustainable level by late 1928 and early 1929. White (1990) created a quarterly index of dividends for firms in the Dow Jones index and related this to the DJI. Through 1927 the two track closely, but in 1928 and 1929 the index of stock prices grows much more rapidly than the index of dividends.

The qualitative evidence for a bubble in the stock market in 1928 and 1929 that White assembled was strengthened by the findings of J. Bradford De Long and Andrei Shleifer (1991). They examined closed-end mutual funds, a type of fund in which investors wishing to liquidate must sell their shares to other individual investors; because such funds hold publicly traded securities, their fundamental value is directly measurable. Using evidence from these funds, De Long and Shleifer estimated that in the summer of 1929, the Standard and Poor’s composite stock price index was overvalued about 30 percent due to excessive investor optimism. Rappoport and White (1993 and 1994) found other evidence that supported a bubble in the stock market in 1928 and 1929: there was a sharp divergence between the growth of stock prices and dividends; there were increasing premiums on call and time brokers’ loans in 1928 and 1929; margin requirements rose; and stock market volatility rose in the wake of the 1929 stock market crash.

There are several reasons for the creation of such a bubble. First, the fundamental values of earnings and dividends become difficult to assess when there are major industrial changes, such as the rapid changes in the automobile industry, the new electric utilities, and the new radio industry. Eugene White (1990) suggests that “While investors had every reason to expect earnings to grow, they lacked the means to evaluate easily the future path of dividends.” As a result, investors bid up prices as they were swept up in the ongoing stock market boom. Second, participation in the stock market widened noticeably in the twenties. The new investors were relatively unsophisticated and more likely to be caught up in the euphoria of the boom and bid prices upward. New, inexperienced commission sales personnel were hired to sell stocks, and they promised glowing returns on stocks they knew little about.

These observations were strengthened by the experimental work of economist Vernon Smith (Bishop, 1987). In a number of experiments over a three-year period using students and Tucson businessmen and businesswomen, bubbles developed as inexperienced investors valued stocks differently and engaged in price speculation. As the investors in these experiments began to realize that speculative profits were unsustainable and uncertain, their dividend expectations changed, the market crashed, and ultimately stocks began trading at their fundamental dividend values. These bubbles and crashes occurred repeatedly, leading Smith to conjecture that there are few regulatory steps that can be taken to prevent a crash.

Though the bubble of 1928 and 1929 made some downward adjustment in stock prices inevitable, as Barsky and De Long have shown, changes in fundamentals govern the overall movements, and the end of the long bull market was almost certainly governed by them. In late 1928 and early 1929 there was a striking rise in economic activity, but a decline began somewhere between May and July of 1929 and was clearly evident by August. By the middle of August, the rise in stock prices had slowed as better information on the contraction was received. There were repeated statements by leading figures that stocks were “overpriced,” and the Federal Reserve System sharply increased the discount rate in August 1929 as well as continuing its call for banks to reduce their margin lending. As this information was assessed, the number of speculators selling stocks increased and the number buying decreased. With the decreased demand, stock prices began to fall, and as more accurate information on the nature and extent of the decline was received, stock prices fell further. The late October crash made the decline much more rapid, and margin purchases and the consequent forced selling of many of those stocks contributed to a more severe fall in prices. The recovery of stock prices from November 13 into April of 1930 suggests that stock prices may have been driven somewhat too low during the crash.

There is now widespread agreement that the 1929 stock market crash did not cause the Great Depression. Instead, the initial downturn in economic activity was a primary determinant of the end of the 1928-29 stock market bubble. The crash did make the downturn more severe beginning in November 1929: it reduced discretionary consumption spending (Romer, 1990) and created greater income uncertainty, helping to bring on the contraction (Flacco and Parker, 1992). Though stock prices reached a bottom and began to recover following November 13, 1929, the continuing decline in economic activity took its toll; by May 1930 stock prices resumed their decline and continued to fall through the summer of 1932.

Domestic Trade

In the nineteenth century, a complex array of wholesalers, jobbers, and retailers had developed, but changes in the postbellum period reduced the role of the wholesalers and jobbers and strengthened the importance of the retailers in domestic trade. (Cochran, 1977; Chandler, 1977; Marburg, 1951; Clewett, 1951) The appearance of the department store in the major cities and the rise of mail order firms in the postbellum period changed the retailing market.

Department Stores

A department store is a combination of specialty stores organized as departments within one general store. A. T. Stewart’s huge 1846 dry goods store in New York City is often referred to as the first department store. (Resseguie, 1965; Sobel-Sicilia, 1986) R. H. Macy started his dry goods store in 1858 and Wanamaker’s in Philadelphia opened in 1876. By the end of the nineteenth century, every city of any size had at least one major department store. (Appel, 1930; Benson, 1986; Hendrickson, 1979; Hower, 1946; Sobel, 1974) Until the late twenties, the department store field was dominated by independent stores, though some department stores in the largest cities had opened a few suburban branches and stores in other cities. In the interwar period department stores accounted for about 8 percent of retail sales.

The department stores relied on a “one-price” policy, which Stewart is credited with beginning. In the antebellum period and into the postbellum period, it was common not to post a specific price on an item; rather, each purchaser haggled with a sales clerk over what the price would be. Stewart posted fixed prices on the various dry goods sold, and the customer could either buy or not buy at the fixed price. The policy dramatically lowered transaction costs for both the retailer and the purchaser. Prices were reduced with a smaller markup over the wholesale price, and a large sales volume and a quicker turnover of the store’s inventory generated profits.

Mail Order Firms

What changed the department store field in the twenties was the entrance of Sears Roebuck and Montgomery Ward, the two dominant mail order firms in the United States. (Emmet-Jeuck, 1950; Chandler, 1962, 1977) Both firms had begun in the late nineteenth century, and by 1914 the younger Sears Roebuck had surpassed Montgomery Ward. Both located in Chicago because of its central location in the nation’s rail network, and both had benefited from the advent of Rural Free Delivery in 1896 and low-cost Parcel Post Service in 1912.

In 1924 Sears hired Robert E. Wood, who convinced Sears Roebuck to open retail stores. Wood believed that the declining rural population and the growing urban population forecast the gradual demise of the mail order business; survival of the mail order firms required a move into retail sales. By 1925 Sears Roebuck had opened 8 retail stores, and by 1929 it had 324 stores. Montgomery Ward quickly followed suit. Rather than locating these stores in the central business district (CBD), Wood located many on major streets closer to the residential areas. These moves of Sears Roebuck and Montgomery Ward expanded department store retailing and provided a new type of chain store.

Chain Stores

Though chain stores grew rapidly in the first two decades of the twentieth century, they date back to the 1860s, when George F. Gilman and George Huntington Hartford opened a string of New York City A&P (Atlantic and Pacific) stores exclusively to sell tea. (Beckman-Nolen, 1938; Lebhar, 1963; Bullock, 1933) Stores were opened in other regions, and in 1912 the firm opened its first “cash-and-carry” full-range grocery. Soon A&P was opening 50 of these stores each week, and by the 1920s it had 14,000 stores. It then phased out the small stores to reduce the chain to 4,000 full-range, supermarket-type stores. A&P’s success led to new grocery store chains such as Kroger, Jewel Tea, and Safeway.

Prior to A&P’s cash-and-carry policy, it was common for grocery stores, produce (or green) grocers, and meat markets to provide home delivery and credit, both of which were costly. As a result, retail prices were generally marked up well above the wholesale prices. In cash-and-carry stores, items were sold only for cash; no credit was extended, and no expensive home deliveries were provided. Markups on prices could be much lower because other costs were much lower. Consumers liked the lower prices and were willing to pay cash and carry their groceries, and the policy became common by the twenties.

Chains also developed in other retail product lines. In 1879 Frank W. Woolworth developed a “5 and 10 Cent Store,” or dime store, and there were over 1,000 F. W. Woolworth stores by the mid-1920s. (Winkler, 1940) Other firms such as Kresge, Kress, and McCrory successfully imitated Woolworth’s dime store chain. J.C. Penney’s dry goods chain store began in 1901 (Beasley, 1948), Walgreen’s drug store chain began in 1909, and shoes, jewelry, cigars, and other lines of merchandise also began to be sold through chain stores.

Self-Service Policies

In 1916 Clarence Saunders, a grocer in Memphis, Tennessee, built upon the one-price policy and began offering self-service at his Piggly Wiggly store. Previously, customers handed a clerk a list or asked for the items desired, which the clerk then collected and the customer paid for. With self-service, items for sale were placed on open shelves among which the customers could walk, carrying a shopping bag or pushing a shopping cart. Each customer could then browse as he or she pleased, picking out whatever was desired. Saunders and other retailers who adopted the self-service method of retail selling found that customers often purchased more because of exposure to the array of products on the shelves; as well, self-service lowered the labor required for retail sales and therefore lowered costs.

Shopping Centers

Shopping centers, another innovation in retailing that began in the twenties, were not destined to become a major force in retail development until after the Second World War. The ultimate cause of this innovation was the widening ownership and use of the automobile. By the 1920s, as the ownership and use of the car expanded, population began to move out of the crowded central cities toward the more open suburbs. When General Robert E. Wood set Sears off on its development of urban stores, he located these not in the central business district (CBD) but as free-standing stores on major arteries away from the CBD, with sufficient space for parking.

At about the same time, a few entrepreneurs began to develop shopping centers. Yehoshua Cohen (1972) says, “The owner of such a center was responsible for maintenance of the center, its parking lot, as well as other services to consumers and retailers in the center.” Perhaps the earliest such shopping center was the Country Club Plaza built in 1922 by the J. C. Nichols Company in Kansas City, Missouri. Other early shopping centers appeared in Baltimore and Dallas. By the mid-1930s the concept of a planned shopping center was well known and was expected to be the means to capture the trade of the growing number of suburban consumers.

International Trade and Finance

In the twenties a gold exchange standard was developed to replace the gold standard of the prewar world. Under a gold standard, each country’s currency carried a fixed exchange rate with gold, and the currency had to be backed up by gold. As a result, all countries on the gold standard had fixed exchange rates with all other countries. Adjustments to balance international trade flows were made by gold flows. If a country had a deficit in its trade balance, gold would leave the country, forcing the money stock to decline and prices to fall. Falling prices made the deficit countries’ exports more attractive and imports more costly, reducing the deficit. Countries with a surplus imported gold, which increased the money stock and caused prices to rise. This made the surplus countries’ exports less attractive and imports more attractive, decreasing the surplus. Most economists who have studied the prewar gold standard contend that it did not work as the conventional textbook model says, because capital flows frequently reduced or eliminated the need for gold flows for long periods of time. However, there is no consensus on whether fortuitous circumstances, rather than the gold standard, saved the international economy from periodic convulsions or whether the gold standard as it did work was sufficient to promote stability and growth in international transactions.
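The textbook adjustment mechanism described above can be made concrete with a toy simulation. This is our own illustrative construction, not a model from the source: the function name, the assumption that prices move in proportion to the gold-backed money stock, and the unit responsiveness of the deficit to prices are all simplifications.

```python
# Toy sketch of the price-specie-flow mechanism: a trade deficit drains
# gold, which shrinks the money stock, lowers prices, and thereby shrinks
# the deficit in later periods. All parameters are illustrative assumptions.
def simulate_deficit_country(gold: float, deficit: float, periods: int = 5):
    """Track a deficit country's gold stock and trade deficit period by period."""
    history = []
    for _ in range(periods):
        new_gold = gold - deficit      # the deficit is settled by a gold outflow
        price_ratio = new_gold / gold  # prices fall in step with the money stock
        deficit *= price_ratio         # cheaper exports shrink next period's deficit
        gold = new_gold
        history.append((round(gold, 1), round(deficit, 2)))
    return history

# Starting with 100 units of gold and a deficit of 10, the deficit
# shrinks each period as falling prices restore competitiveness.
for gold_left, remaining_deficit in simulate_deficit_country(100.0, 10.0):
    print(gold_left, remaining_deficit)
```

The point of the exercise is only that the gold flow is self-limiting: as prices fall the deficit decays toward zero and the outflow stops, which is the equilibrating property the gold exchange standard of the twenties lost.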

After the First World War it was argued that there was a “shortage” of fluid monetary gold to use for the gold standard, so some method of “economizing” on gold had to be found. To do this, two basic changes were made. First, most nations, other than the United States, stopped domestic circulation of gold. Second, the “gold exchange” system was created. Most countries held their international reserves in the form of U.S. dollars or British pounds and international transactions used dollars or pounds, as long as the United States and Great Britain stood ready to exchange their currencies for gold at fixed exchange rates. However, the overvaluation of the pound and the undervaluation of the franc threatened these arrangements. The British trade deficit led to a capital outflow, higher interest rates, and a weak economy. In the late twenties, the French trade surplus led to the importation of gold that they did not allow to expand the money supply.

Economizing on gold by no longer allowing its domestic circulation and by using key currencies as international monetary reserves was really an attempt to place the domestic economies under the control of the nations’ politicians and make them independent of international events. Unfortunately, in doing this politicians eliminated the equilibrating mechanism of the gold standard but had nothing with which to replace it. The new international monetary arrangements of the twenties were potentially destabilizing because they were not allowed to operate as a price mechanism promoting equilibrating adjustments.

There were other problems with international economic activity in the twenties. Because of the war, the United States was abruptly transformed from a debtor to a creditor on international accounts. Though the United States did not want reparations payments from Germany, it did insist that Allied governments repay American loans. The Allied governments then insisted on war reparations from Germany. These initial reparations assessments were quite large. The Allied Reparations Commission collected the charges by supervising Germany’s foreign trade and by internal controls on the German economy, and it was authorized to increase the reparations if it was felt that Germany could pay more. The treaty allowed France to occupy the Ruhr after Germany defaulted in 1923.

Ultimately, this tangled web of debts and reparations, which was a major factor in the course of international trade, depended upon two principal actions. First, the United States had to run an import surplus or, on net, export capital out of the United States to provide a pool of dollars overseas. Second, Germany had either to run an export surplus or to import American capital so as to build up dollar reserves, that is, the dollars the United States was exporting. In effect, these dollars were paid by Germany to Great Britain, France, and other countries that then shipped them back to the United States as payment on their U.S. debts. If these conditions did not hold (and note that the “new” gold standard of the twenties had lost its flexibility because the price adjustment mechanism had been eliminated), disruption in international activity could easily occur and be transmitted to the domestic economies.

In the wake of the 1920-21 depression Congress passed the Emergency Tariff Act, which raised tariffs, particularly on manufactured goods. (Figures 26 and 27) The Fordney-McCumber Tariff of 1922 continued the Emergency Tariff of 1921, and its protection on many items was extremely high, ranging from 60 to 100 percent ad valorem (that is, as a percent of the price of the item). The increases in the Fordney-McCumber tariff were as large as, and sometimes larger than, those of the more famous (or “infamous”) Smoot-Hawley tariff of 1930. As farm product prices fell at the end of the decade, presidential candidate Herbert Hoover proposed, as part of his platform, tariff increases and other changes to aid the farmers. In January 1929, after Hoover’s election but before he took office, a tariff bill was introduced into Congress. Special interests succeeded in gaining additional (or new) protection for most domestically produced commodities, and the goal of greater protection for the farmers tended to get lost in the increased protection for multitudes of American manufactured products. In spite of widespread condemnation by economists, President Hoover signed the Smoot-Hawley Tariff in June 1930, and rates rose sharply.

Following the First World War, the U.S. government actively promoted American exports, and in each of the postwar years through 1929, the United States recorded a surplus in its balance of trade. (Figure 28) However, the surplus declined in the 1930s as both exports and imports fell sharply after 1929. From the mid-1920s on, finished manufactures were the most important exports, while agricultural products dominated American imports.

The majority of the funds that allowed Germany to make its reparations payments to France and Great Britain and hence allowed those countries to pay their debts to the United States came from the net flow of capital out of the United States in the form of direct investment in real assets and investments in long- and short-term foreign financial assets. After the devastating German hyperinflation of 1922 and 1923, the Dawes Plan reformed the German economy and currency and accelerated the U.S. capital outflow. American investors began to actively and aggressively pursue foreign investments, particularly loans (Lewis, 1938) and in the late twenties there was a marked deterioration in the quality of foreign bonds sold in the United States. (Mintz, 1951)

The system, then, worked well as long as there was a net outflow of American capital, but this did not continue. In the middle of 1928, the flow of short-term capital began to decline. In 1928 the flow of “other long-term” capital out of the United States was 752 million dollars, but in 1929 it was only 34 million dollars. Though there is still debate over whether the booming American stock market was to blame, the decline in capital outflows had far-reaching effects on the international economic system and the various domestic economies.

The Start of the Depression

By 1920 the United States held the largest share of the world’s monetary gold, about 40 percent. In the latter part of the twenties, France also began accumulating gold as its share of the world’s monetary gold rose from 9 percent in 1927 to 17 percent in 1929 and 22 percent by 1931. In 1927 the Federal Reserve System had reduced discount rates (the interest rate at which it lent reserves to member commercial banks) and engaged in open market purchases (purchasing U.S. government securities on the open market to increase the reserves of the banking system) to push down interest rates and assist Great Britain in staying on the gold standard. By early 1928 the Federal Reserve System was worried about its loss of gold due to this policy as well as the ongoing boom in the stock market. It began to raise the discount rate to stop these outflows. At the same time, gold was entering the United States so that foreigners could obtain dollars to invest in stocks and bonds. As the United States and France accumulated more and more of the world’s monetary gold, other countries’ central banks took contractionary steps to stem the loss of gold. In country after country these deflationary strategies began contracting economic activity, and by 1928 some countries in Europe, Asia, and South America had entered into a depression. More countries’ economies began to decline in 1929, including the United States, and by 1930 a depression was in force for almost all of the world’s market economies. (Temin, 1989; Eichengreen, 1992)

Monetary and Fiscal Policies in the 1920s

Fiscal Policies

As a tool to promote stability in aggregate economic activity, fiscal policy is largely a post-Second World War phenomenon. Prior to 1930 the federal government’s spending and taxing decisions were largely, but not completely, based on the perceived “need” for government-provided public goods and services.

Though the fiscal policy concept had not been developed, this does not mean that during the twenties no concept of the government’s role in stimulating economic activity existed. Herbert Stein (1990) points out that in the twenties Herbert Hoover and some of his contemporaries shared two ideas about the proper role of the federal government. The first was that federal spending on public works could be an important force in reducing unemployment; the second was that such spending could be timed countercyclically to help stabilize private investment. Both concepts fit the ideas held by Hoover and others of his persuasion that the U.S. economy of the twenties was not the result of laissez-faire workings but of “deliberate social engineering.”

The federal personal income tax was enacted in 1913. Though mildly progressive, its rates were low and topped out at 7 percent on taxable income in excess of $750,000. (Table 4) As the United States prepared for war in 1916, rates were increased and reached a maximum marginal rate of 12 percent. With the onset of the First World War, the rates were dramatically increased; to obtain additional revenue, marginal rates were increased again in 1918. The share of federal revenue generated by income taxes rose from 11 percent in 1914 to 69 percent in 1920. The tax rates had been extended downward so that more than 30 percent of the nation’s income recipients were subject to income taxes by 1918. However, through the purchase of tax-exempt state and local securities and through steps taken by corporations to avoid the cash distribution of profits, the number of high-income taxpayers and their share of total taxes paid declined as Congress kept increasing the tax rates. The normal (or base) tax rate was reduced slightly for 1919, but the surtax rates, which made the income tax highly progressive, were retained. (Smiley-Keehn, 1995)

President Harding’s new Secretary of the Treasury, Andrew Mellon, proposed cutting the tax rates, arguing that the rates in the higher brackets had “passed the point of productivity” and rates in excess of 70 percent simply could not be collected. Though most agreed that the rates were too high, there was sharp disagreement on how the rates should be cut. Democrats and Progressive Republicans argued for rate cuts targeted for the lower income taxpayers while maintaining most of the steep progressivity of the tax rates. They believed that remedies could be found to change the tax laws to stop the legal avoidance of federal income taxes. Republicans argued for sharper cuts that reduced the progressivity of the rates. Mellon proposed a maximum rate of 25 percent.

Though the federal income tax rates were reduced and made less progressive, it took three tax rate cuts in 1921, 1924, and 1925 before Mellon’s goal was finally achieved. The highest marginal tax rate was reduced from 73 percent to 58 percent to 46 percent and finally to 25 percent for the 1925 tax year. All of the other rates were also reduced and exemptions increased. By 1926, only about the top 10 percent of income recipients were subject to federal income taxes. As tax rates were reduced, the number of high income tax returns increased and the share of total federal personal income taxes paid rose. (Tables 5 and 6) Even with the dramatic income tax rate cuts and reductions in the number of low income taxpayers, federal personal income tax revenue continued to rise during the 1920s. Though early estimates of the distribution of personal income showed sharp increases in income inequality during the 1920s (Kuznets, 1953; Holt, 1977), more recent estimates have found that the increases in inequality were considerably less and these appear largely to be related to the sharp rise in capital gains due to the booming stock market in the late twenties. (Smiley, 1998 and 2000)

Each year in the twenties the federal government generated a surplus, in some years as much as 1 percent of GNP. The surpluses were used to retire outstanding federal debt, which declined by about 25 percent between 1920 and 1930. Contrary to simple macroeconomic models that argue a federal government budget surplus must be contractionary and tend to stop an economy from reaching full employment, the American economy operated at or close to full employment throughout the twenties and saw significant economic growth. In this case, the surpluses were not contractionary because the dollars were circulated back into the economy through the purchase of outstanding federal debt rather than pulled out as currency and held in a vault somewhere.

Monetary Policies

In 1913 fear of the “money trust” and its monopoly power led Congress to create 12 central banks when it created the Federal Reserve System. The new central banks were to control money and credit and act as lenders of last resort to end banking panics. The role of the Federal Reserve Board, located in Washington, D.C., was to coordinate the policies of the 12 district banks; it was composed of five presidential appointees and the current secretary of the treasury and comptroller of the currency. All national banks had to become members of the Federal Reserve System, the Fed, and any state bank meeting the qualifications could elect to do so.

The act specified fixed reserve requirements on demand and time deposits, all of which had to be on deposit in the district bank. Member banks could rediscount eligible commercial paper at their district bank and receive Federal Reserve currency in return. Initially, each district bank set its own rediscount rate. To provide additional income when there was little rediscounting, the district banks were allowed to engage in open market operations, which involved the purchasing and selling of federal government securities, short-term securities of state and local governments issued in anticipation of taxes, foreign exchange, and domestic bills of exchange. The district banks were also designated to act as fiscal agents for the federal government. Finally, the Federal Reserve System provided a central check clearinghouse for the entire banking system.

When the Federal Reserve System was originally set up, it was believed that its primary roles were to be a lender of last resort to prevent banking panics and to serve as a check-clearing mechanism for the nation’s banks. The Federal Reserve Board and the governors of the district banks were to exercise these functions jointly. The division of functions was not clear, however, and a struggle for power ensued, mainly between the New York Federal Reserve Bank, led through 1928 by J. P. Morgan’s protégé Benjamin Strong, and the Federal Reserve Board. By the thirties the Federal Reserve Board had achieved dominance.

There were really two conflicting criteria upon which monetary actions were ostensibly based: the Gold Standard and the Real Bills Doctrine. The Gold Standard was supposed to be quasi-automatic, with an effective limit to the quantity of money. However, the Real Bills Doctrine (which required that all loans be made on short-term, self-liquidating commercial paper) had no effective limit on the quantity of money. The rediscounting of eligible commercial paper was supposed to lead to the required “elasticity” of the stock of money to “accommodate” the needs of industry and business. Actually the rediscounting of commercial paper, open market purchases, and gold inflows all had the same effects on the money stock.

The 1920-21 Depression

During the First World War, the Fed kept discount rates low and granted discounts on banks’ customer loans used to purchase war bonds in order to help finance the war. The final Victory Loan had not been floated when the Armistice was signed in November of 1918: in fact, it took until October of 1919 for the government to fully sell this last loan issue. The Treasury, with the secretary of the treasury sitting on the Federal Reserve Board, persuaded the Federal Reserve System to maintain low interest rates and discount the Victory bonds necessary to keep bond prices high until this last issue had been floated. As a result, during this period the money supply grew rapidly and prices rose sharply.

A shift from a federal deficit to a surplus and supply disruptions due to steel and coal strikes in 1919 and a railroad strike in early 1920 contributed to the end of the boom. But the most common view is that the Fed’s monetary policy was the main determinant of the end of the expansion and inflation and the beginning of the subsequent contraction and severe deflation. When the Fed was released from its informal agreement with the Treasury in November of 1919, it raised the discount rate from 4 to 4.75 percent. Benjamin Strong (the governor of the New York bank) was beginning to believe that the time for strong action was past and that the Federal Reserve System’s actions should be moderate. However, with Strong out of the country, the Federal Reserve Board increased the discount rate from 4.75 to 6 percent in late January of 1920 and to 7 percent on June 1, 1920. By the middle of 1920, economic activity and employment were rapidly falling, and prices had begun their downward spiral in one of the sharpest price declines in American history. The Federal Reserve System kept the discount rate at 7 percent until May 5, 1921, when it was lowered to 6.5 percent. By June of 1922, the rate had been lowered yet again to 4 percent. (Friedman and Schwartz, 1963)

The Federal Reserve System authorities received considerable criticism then and later for their actions. Milton Friedman and Anna Schwartz (1963) contend that the discount rate was raised too much too late and then kept too high for too long, causing the decline to be more severe and the price deflation to be greater. In their opinion the Fed acted in this manner due to the necessity of meeting the legal reserve requirement with a safe margin of gold reserves. Elmus Wicker (1966), however, argues that the gold reserve ratio was not the main factor determining the Federal Reserve policy in the episode. Rather, the Fed knowingly pursued a deflationary policy because it felt that the money supply was simply too large and prices too high. To return to the prewar parity for gold required lowering the price level, and there was an excessive stock of money because the additional money had been used to finance the war, not to produce consumer goods. Finally, the outstanding indebtedness was too large due to the creation of Fed credit.

Whether statutory gold reserve requirements to maintain the gold standard or domestic credit conditions were the most important determinant of Fed policy is still an open question, though both certainly had some influence. Regardless of the answer to that question, the Federal Reserve System’s first major undertaking in the years immediately following the First World War demonstrated poor policy formulation.

Federal Reserve Policies from 1922 to 1930

By 1921 the district banks began to recognize that their open market purchases had effects on interest rates, the money stock, and economic activity. For the next several years, economists in the Federal Reserve System discussed how this worked and how it could be related to discounting by member banks. A committee was created to coordinate the open market purchases of the district banks.

The recovery from the 1920-1921 depression had proceeded smoothly with moderate price increases. In early 1923 the Fed sold some securities and raised the discount rate from its 4 percent level because it believed the recovery was too rapid. However, by the fall of 1923 there were some signs of a business slump. McMillin and Parker (1994) argue that this contraction, like the 1927 contraction, was related to oil price shocks. By October of 1923 Benjamin Strong was advocating securities purchases to counter the slump. Between then and September 1924 the Federal Reserve System increased its securities holdings by over $500 million. Between April and August of 1924 the Fed reduced the discount rate to 3 percent in a series of three separate steps. In addition to moderating the mild business slump, the expansionary policy was intended to reduce American interest rates relative to British rates. This reversed the gold flow back toward Great Britain, allowing Britain to return to the gold standard in 1925. At the time it appeared that the Fed’s monetary policy had successfully accomplished its goals.

By the summer of 1924 the business slump was over and the economy again began to grow rapidly. By the mid-1920s real estate speculation had arisen in many urban areas in the United States, especially in southeastern Florida, and land prices were rising sharply. Stock market prices had also begun rising more rapidly. The Fed expressed some worry about these developments and in 1926 sold some securities to gently slow the real estate and stock market booms. Amid hurricanes and supply bottlenecks the Florida real estate boom collapsed, but the stock market boom continued.

The American economy entered another mild business recession in the fall of 1926 that lasted until the fall of 1927. One factor in this was Henry Ford’s shutdown of all of his factories for the changeover from the Model T to the Model A, which left his employees without jobs and without income for over six months. International concerns also reappeared. France, which was preparing to return to the gold standard, had begun accumulating gold, and gold continued to flow into the United States. Some of this gold came from Great Britain, making it difficult for the British to remain on the gold standard. This occasioned a new experiment in central bank cooperation. In July 1927 Benjamin Strong arranged a conference with Governor Montagu Norman of the Bank of England, Governor Hjalmar Schacht of the Reichsbank, and Deputy Governor Charles Rist of the Bank of France in an attempt to promote cooperation among the world’s central bankers. By the time the conference began, the Fed had already taken steps to counteract the business slump and reduce the gold inflow. In early 1927 the Fed reduced discount rates and made large securities purchases. One result was that the gold stock fell from $4.3 billion in mid-1927 to $3.8 billion in mid-1928. Some of the exported gold went to France, and France returned to the gold standard with its undervalued currency. The loss of gold from Britain eased, allowing it to maintain the gold standard.

By early 1928 the Fed was again becoming worried. Stock market prices were rising even faster, and the apparent speculative bubble in the stock market was of some concern to Fed authorities. The Fed was also concerned about the loss of gold and wanted to bring it to an end. To do this, the Fed sold securities and, in three steps, raised the discount rate to 5 percent by July 1928. To this point the Federal Reserve Board had largely agreed with district bank policy changes. However, problems soon began to develop.

During the stock market boom of the late 1920s the Federal Reserve Board preferred to use “moral suasion” rather than increases in discount rates to lessen member bank borrowing. The New York bank insisted that moral suasion would not work unless backed up by literal credit rationing on a bank-by-bank basis, which it, like the other district banks, was unwilling to undertake; the district banks insisted that discount rates had to be increased. The Federal Reserve Board countered that such a general policy change would slow economic activity across the board rather than target stock market speculation specifically. The result was that little was done for a year: rates were not raised, but no open market purchases were undertaken either. Rates were finally raised to 6 percent in August of 1929. By that time the contraction had already begun. In late October the stock market crashed, and America slid into the Great Depression.

In November, following the stock market crash, the Fed reduced discount rates to 4.5 percent. In January it decreased discount rates again, beginning a series of reductions that brought the rate to 2.5 percent by the end of 1930. No further open market operations were undertaken for the next six months. As banks reduced their discounting in 1930, the stock of money declined. There was a banking crisis in the Southeast in November and December of 1930, and in its wake the public’s holding of currency relative to deposits and banks’ reserve ratios began to rise and continued to do so through the end of the Great Depression.

Conclusion

Though some disagree, there is growing evidence that the behavior of the American economy in the 1920s did not cause the Great Depression. The depressed 1930s were not “retribution” for the exuberant growth of the 1920s. The weakness of a few economic sectors in the 1920s did not foreshadow the contraction from 1929 to 1933. Rather, it was the depression of the 1930s and the Second World War that interrupted the economic growth begun in the 1920s and resumed after the war. Just as the construction of skyscrapers that began in the 1920s resumed in the 1950s, so did real economic growth and progress. In retrospect we can see that the introduction and expansion of new technologies and industries in the 1920s, such as autos, household electric appliances, radio, and electric utilities, are echoed in the 1990s in the effects of the expanding use and development of the personal computer and the rise of the internet. The 1920s have much to teach us about the growth and development of the American economy.


Donald J. Harreld, Brigham Young University

In just over one hundred years, the provinces of the Northern Netherlands went from relative obscurity as the poor cousins of the industrious and heavily urbanized Southern Netherlands provinces of Flanders and Brabant to the pinnacle of European commercial success. Taking advantage of a favorable agricultural base, the Dutch achieved success in the fishing industry and the Baltic and North Sea carrying trade during the fifteenth and sixteenth centuries before establishing a far-flung maritime empire in the seventeenth century.

The Economy of the Netherlands up to the Sixteenth Century

In many respects the seventeenth-century Dutch Republic inherited the economic successes of the Burgundian and Habsburg Netherlands. For centuries, Flanders and to a lesser extent Brabant had been at the forefront of the medieval European economy. An indigenous cloth industry was present throughout all areas of Europe in the early medieval period, but Flanders was the first to develop the industry with great intensity. A tradition of cloth manufacture in the Low Countries stretched back to antiquity, when the Celts and then the Franks carried on an active textile industry learned from the Romans.

As demand grew, early textile production moved from its rural origins to the cities and had become, by the twelfth century, an essentially urban industry. Native wool could not keep up with demand, and the Flemings imported English wool in great quantities. The resulting high-quality product was much in demand all over Europe, from Novgorod to the Mediterranean. Brabant also rose to an important position in the textile industry, but only about a century after Flanders. By the thirteenth century the number of people engaged in some aspect of the textile industry in the Southern Netherlands exceeded the total engaged in all other crafts. It is possible that this emphasis on cloth manufacture was the reason that the Flemish towns ignored the emerging maritime shipping industry, which was eventually dominated by others, first the German Hanseatic League, and later Holland and Zeeland.

By the end of the fifteenth century Antwerp in Brabant had become the commercial capital of the Low Countries as foreign merchants went to the city in great numbers in search of the high-value products offered at the city’s fairs. But the traditional cloths manufactured in Flanders had lost their allure for most European markets, particularly as the English began exporting high quality cloths rather than the raw materials the Flemish textile industry depended on. Many textile producers turned to the lighter weight and cheaper “new draperies.” Despite protectionist measures instituted in the mid-fifteenth century, English cloth found an outlet in Antwerp’s burgeoning markets. By the early years of the sixteenth century the Portuguese began using Antwerp as an outlet for their Asian pepper and spice imports, and the Germans continued to bring their metal products (copper and silver) there. For almost a hundred years Antwerp remained the commercial capital of northern Europe, until the religious and political events of the 1560s and 1570s intervened and the Dutch Revolt against Spanish rule toppled the commercial dominance of Antwerp and the southern provinces. Within just a few years of the Fall of Antwerp (1585), scores of merchants and mostly Calvinist craftsmen fled the south for the relative security of the Northern Netherlands.

The exodus from the south certainly added to the already growing population of the north. However, much like Flanders and Brabant, the northern provinces of Holland and Zeeland were already populous and heavily urbanized. The population of these maritime provinces had been steadily growing throughout the sixteenth century, perhaps tripling between the first years of the sixteenth century and about 1650. The inland provinces grew much more slowly during the same period. Not until the eighteenth century, when the Netherlands as a whole faced declining fortunes, would the inland provinces begin to match the growth of the coastal core of the country.

Dutch Agriculture

During the fifteenth century, and most of the sixteenth century, the Northern Netherlands provinces were predominantly rural compared to the urbanized southern provinces. Agriculture and fishing formed the basis for the Dutch economy in the fifteenth and sixteenth centuries. One of the characteristics of Dutch agriculture during this period was its emphasis on intensive animal husbandry. Dutch cattle were exceptionally well cared for, and dairy produce formed a significant segment of the agricultural sector. During the seventeenth century, as the Dutch urban population saw dramatic growth, many farmers also turned to market gardening to supply the cities with vegetables.

Some of the impetus for animal production came from the trade in slaughter cattle from Denmark and Northern Germany. Holland was an ideal area for cattle feeding and fattening before eventual slaughter and export to the cities of the Southern provinces. The trade in slaughter cattle expanded from about 1500 to 1660, but protectionist measures on the part of Dutch authorities who wanted to encourage the fattening of home-bred cattle ensured a contraction of the international cattle trade between 1660 and 1750.

Although agriculture made up the largest segment of the Dutch economy, cereal production in the Netherlands could not keep up with demand particularly by the seventeenth century as migration from the southern provinces contributed to population increases. The provinces of the Low Countries traditionally had depended on imported grain from the south (France and the Walloon provinces) and when crop failures interrupted the flow of grain from the south, the Dutch began to import grain from the Baltic. Baltic grain imports experienced sustained growth from about the middle of the sixteenth century to roughly 1650 when depression and stagnation characterized the grain trade into the eighteenth century.

Indeed, the Baltic grain trade (see below), a major source of employment for the Dutch, not only in maritime transport but in handling and storage as well, was characterized as the “mother trade.” In her recent book on the Baltic grain trade, Milja van Tielhof defined “mother trade” as the oldest and most substantial trade with respect to ships, sailors and commodities for the Northern provinces. Over the long term, the Baltic grain trade gave rise to shipping and trade on other routes as well as to manufacturing industries.

Dutch Fishing

Along with agriculture, the Dutch fishing industry formed part of the economic base of the northern Netherlands. Like the Baltic grain trade, it also contributed to the rise of the Dutch shipping industry.

The backbone of the fishing industry was the North Sea herring fishery, which was quite advanced and included a form of “factory” ship called the herring bus. The herring bus was developed in the fifteenth century in order to allow the herring catch to be processed with salt at sea. This permitted the herring ship to remain at sea longer and increased the range of the herring fishery. Herring was an important export product for the Netherlands, particularly to inland areas, but also to the Baltic, offsetting Baltic grain imports.

The herring fishery reached its zenith in the first half of the seventeenth century. Estimates put the size of the herring fleet at roughly 500 busses and the catch at about 20,000 to 25,000 lasts (roughly 33,000 metric tons) on average each year in the first decades of the seventeenth century. The herring catch as well as the number of busses began to decline in the second half of the seventeenth century, collapsing by about the mid-eighteenth century when the catch amounted to only about 6000 lasts. This decline was likely due to competition resulting from a reinvigoration of the Baltic fishing industry that succeeded in driving prices down, as well as competition within the North Sea by the Scottish fishing industry.
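As a rough consistency check on the figures above, the quoted peak totals imply a conversion factor of roughly 1.5 metric tons per last, which in turn lets one gauge the scale of the mid-eighteenth-century collapse. The arithmetic below is my own illustration, not a calculation from the source:

```python
# Back-of-the-envelope check of the herring figures quoted in the text.
# The tons-per-last factor and the mid-18th-century tonnage are derived
# estimates for illustration, not figures given in the source.

peak_lasts = (20_000 + 25_000) / 2   # average annual catch, early 1600s
peak_tons = 33_000                   # metric tons, as given in the text

tons_per_last = peak_tons / peak_lasts   # about 1.47 metric tons per last
late_lasts = 6_000                       # annual catch after the collapse
late_tons = late_lasts * tons_per_last   # about 8,800 metric tons

decline = 1 - late_lasts / peak_lasts    # roughly a 73 percent drop
print(f"{tons_per_last:.2f} t/last; late catch ~{late_tons:,.0f} t; "
      f"decline of about {decline:.0%}")
```

On these numbers, the catch of the mid-eighteenth century was barely a quarter of the early seventeenth-century peak, consistent with the text’s description of a collapse.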

The Dutch Textile Industry

The heartland for textile manufacturing had been Flanders and Brabant until the onset of the Dutch Revolt around 1568. Years of warfare continued to devastate the already beaten-down Flemish cloth industry. Even the cloth-producing towns of the Northern Netherlands that had been focusing on producing the “new draperies” saw their output decline as a result of wartime interruptions. But textiles remained the most important industry for the Dutch economy.

Despite the blow it suffered during the Dutch revolt, Leiden’s textile industry, for instance, rebounded in the early seventeenth century – thanks to the influx of textile workers from the Southern Netherlands who emigrated there in the face of religious persecution. But by the 1630s Leiden had abandoned the heavy traditional wool cloths in favor of a lighter traditional woolen (laken) as well as a variety of other textiles such as says, fustians, and camlets. Total textile production increased from 50,000 or 60,000 pieces per year in the first few years of the seventeenth century to as much as 130,000 pieces per year during the 1660s. Leiden’s wool cloth industry probably reached peak production by 1670. The city’s textile industry was successful because it found export markets for its inexpensive cloths in the Mediterranean, much to the detriment of Italian cloth producers.

Next to Lyons, Leiden may have been Europe’s largest industrial city at the end of the seventeenth century. Production was carried out through the “putting out” system, whereby weavers with their own looms, often with other dependent weavers working for them, obtained imported raw materials from merchants who paid the weavers by the piece for their work (the merchant retained ownership of the raw materials throughout the process). By the end of the seventeenth century foreign competition threatened the Dutch textile industry. Production in many of the new draperies (says, for example) decreased considerably throughout the eighteenth century; profits suffered as prices declined in all but the most expensive textiles. This left the production of traditional woolens to drive what was left of Leiden’s textile industry in the eighteenth century.

Although Leiden certainly led the Netherlands in the production of wool cloth, it was not the only textile-producing city in the United Provinces. Amsterdam, Utrecht, Delft and Haarlem, among others, had vibrant textile industries. Haarlem, for example, was home to an important linen industry during the first half of the seventeenth century. Like Leiden’s cloth industry, Haarlem’s linen industry benefited from experienced linen weavers who migrated from the Southern Netherlands during the Dutch Revolt. Haarlem’s hold on linen production, however, was due more to its success in linen bleaching and finishing. Not only was locally produced linen finished in Haarlem, but linen merchants from other areas of Europe sent their products to Haarlem for bleaching and finishing. When linen production moved to more rural areas in the second half of the seventeenth century, as producers sought to decrease costs, Haarlem’s industry went into decline.

Other Dutch Industries

Industries also developed as a result of overseas colonial trade, in particular Amsterdam’s sugar refining industry. During the sixteenth century, Antwerp had been Europe’s most important sugar refining city, a title it inherited from Venice once the Atlantic sugar islands began to surpass Mediterranean sugar production. Once Antwerp fell to Spanish troops during the Revolt, however, Amsterdam replaced it as Europe’s dominant sugar refiner. The number of sugar refineries in Amsterdam increased from about 3 around 1605 to about 50 by 1662, thanks in no small part to Portuguese investment. Dutch merchants purchased huge amounts of sugar from both the French and the English islands in the West Indies, along with a great deal of tobacco. Tobacco processing became an important Amsterdam industry in the seventeenth century employing large numbers of workers and leading to attempts to develop domestic tobacco cultivation.
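The growth of Amsterdam’s sugar refining implied by the figures above can be put in annual terms. The implied growth rate below is my own derived estimate for illustration, not a figure from the source:

```python
# Illustrative growth arithmetic for Amsterdam's sugar refineries, using
# the figures quoted in the text (about 3 refineries c. 1605, about 50
# by 1662). The implied annual rate is a derived estimate, not a source
# figure, and growth was certainly not this smooth in practice.

refineries_1605 = 3
refineries_1662 = 50
years = 1662 - 1605  # 57 years

annual_growth = (refineries_1662 / refineries_1605) ** (1 / years) - 1
print(f"implied average growth: about {annual_growth:.1%} per year")
```

A sustained rate of roughly 5 percent per year over more than half a century underlines how rapidly the colonial trades reshaped Amsterdam’s industrial base.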

With the exception of some of the “colonial” industries (sugar, for instance), Dutch industry experienced a period of stagnation after the 1660s and eventual decline beginning around the turn of the eighteenth century. It would seem that as far as industrial production is concerned, the Dutch Golden Age lasted from the 1580s until about 1670. This period was followed by roughly one hundred years of declining industrial production. De Vries and van der Woude concluded that Dutch industry experienced explosive growth after the 1580s because of the migration of skilled labor and merchant capital from the southern Netherlands at roughly the time Antwerp fell to the Spanish, and because of the relative advantage that continued warfare in the south gave to the Northern Provinces. After the 1660s most Dutch industries experienced either steady or steep decline as many moved from the cities into the countryside, while some (particularly the colonial industries) remained successful well into the eighteenth century.

Dutch Shipping and Overseas Commerce

Dutch shipping began to emerge as a significant sector during the fifteenth century. Probably because merchants from the Southern Netherlands declined to participate in seaborne transport themselves, the towns of Zeeland and Holland began to serve the shipping needs of the commercial towns of Flanders and Brabant (particularly Antwerp). The Dutch, who were already active in the North Sea as a result of the herring fishery, began to compete with the German Hanseatic League for Baltic markets by exporting their herring catches, salt, wine, and cloth in exchange for Baltic grain.

The Grain Trade

Baltic grain played an essential role for the rapidly expanding markets in western and southern Europe. By the beginning of the sixteenth century the urban populations had increased in the Low Countries fueling the market for imported grain. Grain and other Baltic products such as tar, hemp, flax, and wood were not only destined for the Low Countries, but also England and for Spain and Portugal via Amsterdam, the port that had succeeded in surpassing Lübeck and other Hanseatic towns as the primary transshipment point for Baltic goods. The grain trade sparked the development of a variety of industries. In addition to the shipbuilding industry, which was an obvious outgrowth of overseas trade relationships, the Dutch manufactured floor tiles, roof tiles, and bricks for export to the Baltic; the grain ships carried them as ballast on return voyages to the Baltic.

The importance of the Baltic markets to Amsterdam, and to Dutch commerce in general, can be illustrated by recalling that when the Danish closed the Sound to Dutch ships in 1542, the Dutch faced financial ruin. But by the mid-sixteenth century, the Dutch had developed such a strong presence in the Baltic that they were able to exact transit rights from Denmark (Peace of Speyer, 1544) allowing them freer access to the Baltic via Danish waters. Despite the upheaval caused by the Dutch Revolt and the commercial crisis that hit Antwerp in the last quarter of the sixteenth century, the Baltic grain trade remained robust until the last years of the seventeenth century. That the Dutch referred to the Baltic trade as their “mother trade” is not surprising given the importance Baltic markets continued to hold for Dutch commerce throughout the Golden Age. Unfortunately for Dutch commerce, Europe’s population began to decline somewhat at the close of the seventeenth century and remained depressed for several decades. Increased grain production in Western Europe and the availability of non-Baltic substitutes (American and Italian rice, for example) further decreased demand for Baltic grain, resulting in a downturn in Amsterdam’s grain market.

Expansion into African, American and Asian Markets – “World Primacy”

Building on the early successes of their Baltic trade, Dutch shippers expanded their sphere of influence east into Russia and south into the Mediterranean and the Levantine markets. By the turn of the seventeenth century, Dutch merchants had their eyes on the American and Asian markets that were dominated by Iberian merchants. The ability of Dutch shippers to effectively compete with entrenched merchants, like the Hanseatic League in the Baltic, or the Portuguese in Asia stemmed from their cost cutting strategies (what de Vries and van der Woude call “cost advantages and institutional efficiencies,” p. 374). Not encumbered by the costs and protective restrictions of most merchant groups of the sixteenth century, the Dutch trimmed their costs enough to undercut the competition, and eventually establish what Jonathan Israel has called “world primacy.”

Before Dutch shippers could even attempt to break into the Asian markets, they needed first to expand their presence in the Atlantic. This was left mostly to the émigré merchants from Antwerp, who had relocated to Zeeland following the Revolt. These merchants set up the so-called Guinea trade with West Africa, and initiated Dutch involvement in the Western Hemisphere. Dutch merchants involved in the Guinea trade ignored the slave trade, which was firmly in the hands of the Portuguese, in favor of the rich trade in gold, ivory, and sugar from São Tomé. Trade with West Africa grew slowly, but competition was stiff. By 1599, the various Guinea companies had agreed to the formation of a cartel to regulate trade. Continued competition from a slew of new companies, however, ensured that the cartel would be only partially effective until the organization of the Dutch West India Company in 1621, which also held monopoly rights in the West Africa trade.

The Dutch at first focused their trade with the Americas on the Caribbean. By the mid-1590s only a few Dutch ships each year were making the voyage across the Atlantic. When the Spanish instituted an embargo against the Dutch in 1598, shortages of products traditionally obtained in Iberia (like salt) became common. Dutch shippers seized the chance to find new sources for products that had been supplied by the Spanish, and soon fleets of Dutch ships sailed to the Americas. The Spanish and Portuguese had a much larger presence in the Americas than the Dutch could mount, despite the large number of vessels the Dutch sent to the area. Dutch strategy was therefore to avoid Iberian strongholds while penetrating markets where the products they desired could be found. For the most part, this strategy meant focusing on Venezuela, Guyana, and Brazil. Indeed, by the turn of the seventeenth century, the Dutch had established forts on the coasts of Guyana and Brazil.

While competition between rival companies from the towns of Zeeland marked Dutch trade with the Americas in the first years of the seventeenth century, by the time the West India Company finally received its charter in 1621, troubles with Spain once again threatened to disrupt trade. Funding for the new joint-stock company came slowly, and oddly enough came mostly from inland towns like Leiden rather than coastal towns. The West India Company was hit with setbacks in the Americas from the very start. The Portuguese began to drive the Dutch out of Brazil in 1624, and by 1625 the Dutch were losing their position in the Caribbean as well. Dutch shippers in the Americas soon found raiding (directed at the Spanish and Portuguese) to be their most profitable activity until the Company was able to establish forts in Brazil again in the 1630s and begin sugar cultivation. Sugar remained the most lucrative activity for the Dutch in Brazil, but once the revolt of Portuguese Catholic planters against the Dutch plantation owners broke out in the late 1640s, the fortunes of the Dutch declined steadily.

The Dutch faced the prospect of stiff Portuguese competition in Asia as well. But breaking into the lucrative Asian markets was not simply a matter of undercutting less efficient Portuguese shippers. The Portuguese closely guarded the route around Africa. Not until roughly one hundred years after the first Portuguese voyage to Asia were the Dutch in a position to mount their own expedition. Thanks to the travelogue of Jan Huyghen van Linschoten, published in 1596, the Dutch gained the information they needed to make the voyage. Linschoten had been in the service of the Bishop of Goa, and kept excellent records of the voyage and his observations in Asia.

The United East India Company (VOC)

The first few Dutch voyages to Asia were not particularly successful. These early enterprises managed to make only enough to cover the costs of the voyage, but by 1600 dozens of Dutch merchant ships made the trip. This intense competition among various Dutch merchants had a destabilizing effect on prices, driving the government to insist on consolidation in order to avoid commercial ruin. The United East India Company (usually referred to by its Dutch initials, VOC) received a charter from the States General in 1602 conferring upon it monopoly trading rights in Asia. This joint-stock company attracted roughly 6.5 million florins in initial capitalization from over 1,800 investors, most of whom were merchants. Management of the company was vested in 17 directors (Heren XVII) chosen from among the largest shareholders.

In practice, the VOC became virtually a “country” unto itself outside of Europe, particularly after about 1620 when the company’s governor-general in Asia, Jan Pieterszoon Coen, founded Batavia (the company factory) on Java. While Coen and later governors-general set about expanding the territorial and political reach of the VOC in Asia, the Heren XVII were most concerned about profits, which they repeatedly reinvested in the company much to the chagrin of investors. In Asia, the strategy of the VOC was to insert itself into the intra-Asian trade (much like the Portuguese had done in the sixteenth century) in order to amass enough capital to pay for the spices shipped back to the Netherlands. This often meant displacing the Portuguese by waging war in Asia, while trying to maintain peaceful relations within Europe.

Over the long term, the VOC was very profitable during the seventeenth century despite the company’s reluctance to pay cash dividends in the first few decades (the company paid dividends in kind until about 1644). As the English and French began to institute mercantilist strategies (for instance, the Navigation Acts of 1651 and 1660 in England, and import restrictions and high tariffs in the case of France), Dutch dominance in foreign trade came under attack. Rather than experience a decline like domestic industry did at the end of the seventeenth century, the Dutch Asia trade continued to ship goods at steady volumes well into the eighteenth century. Dutch dominance, however, was met with stiff competition by rival India companies as the Asia trade grew. As the eighteenth century wore on, the VOC’s share of the Asia trade declined significantly compared to its rivals, the most important of which was the English East India Company.

Dutch Finance

The last sector that we need to highlight is finance, perhaps the most important sector for the development of the early modern Dutch economy. The most visible manifestation of Dutch capitalism was the exchange bank founded in Amsterdam in 1609, only two years after the city council approved the construction of a bourse (additional exchange banks were founded in other Dutch commercial cities). The activities of the bank were limited to exchange and deposit banking. A lending bank, founded in Amsterdam in 1614, rounded out the financial services in the commercial capital of the Netherlands.

The ability to manage the wealth generated by trade and industry (accumulated capital) in new ways was one of the hallmarks of the economy during the Golden Age. As early as the fourteenth century, Italian merchants had been experimenting with ways to decrease the use of cash in long-distance trade. The resulting instrument was the bill of exchange, developed as a way for a seller to extend credit to a buyer. The bill of exchange required the debtor to pay the debt at a specified place and time. But the creditor rarely held on to the bill of exchange until maturity, preferring to sell it or otherwise use it to pay off debts. These bills of exchange were not routinely used in commerce in the Low Countries until the sixteenth century, when Antwerp was still the dominant commercial city in the region. In Antwerp the bill of exchange could be assigned to another, and eventually became a negotiable instrument with the practice of discounting the bill.

The flexibility of bills of exchange moved to the Northern Netherlands with the large numbers of Antwerp merchants who brought with them their commercial practices. In an effort to standardize the practices surrounding bills of exchange, the Amsterdam government restricted payment of bills of exchange to the new exchange bank. The bank was wildly popular with merchants: deposits increased from just under one million guilders in 1611 to over sixteen million by 1700. Amsterdam’s exchange bank flourished because of its ability to handle deposits and transfers, and to settle international debts.

By the second half of the seventeenth century many wealthy merchant families had turned away from foreign trade and began engaging in speculative activities on a much larger scale. They traded in commodity values (futures), shares in joint-stock companies, and dabbled in insurance and currency exchanges to name only a few of the most important ventures.

Conclusion

Building on its fifteenth- and sixteenth-century successes in agricultural productivity, and in North Sea and Baltic shipping, the Northern Netherlands inherited the economic legacy of the southern provinces as the Revolt tore the Low Countries apart. The Dutch Golden Age lasted from roughly 1580, when the Dutch proved themselves successful in their fight with the Spanish, to about 1670, when the Republic’s economy experienced a down-turn. Economic growth was very fast until about 1620, when it slowed, but the economy continued to grow steadily until the end of the Golden Age. The last decades of the seventeenth century were marked by declining production and loss of market dominance overseas.

John McDonald, Flinders University, Adelaide, Australia

The Domesday Survey of 1086 provides high quality and detailed information on the inputs, outputs and tax assessments of most English estates. This article describes how the data have been used to reconstruct the eleventh-century Domesday economy. By exploiting modern economic theory and statistical methods the reconstruction has led to a radically different assessment of the way in which the Domesday economy and fiscal system were organized. It appears that tax assessments were based on a capacity to pay principle subject to politically expedient concessions and we can discover who received lenient assessments and why. Penetrating questions can be asked about the economy. We can compare the efficiency of Domesday agricultural production with the efficiency of more modern economies, measure the productivity of inputs and assess the impact of feudalism and manorialism on economic activity. The emerging picture of a reasonably well organized economy and fair tax system contrasts with the assessment of earlier historians who saw the Normans as capable military and civil administrators but regarded the economy as haphazardly run and tax assessments as “artificial” or arbitrary. The next section describes the Survey, the contemporary institutional arrangements and the main features of Domesday agricultural production. Some key findings on the Domesday economy and tax system are then briefly discussed.

Domesday England and the Domesday Survey

William the Conqueror invaded England from France in 1066 and carried out the Domesday Survey twenty years later. By 1086, Norman rule had been largely consolidated, although only after rebellion and civil dissent had been harshly put down. The Conquest was achieved by an elite, and, although the Normans brought new institutions and practices, these were superimposed on the existing order. Most of the Anglo-Saxon aristocracy were eliminated, the lands of over 4,000 English lords passing to less than 200 Norman barons, with much of the land held by just a handful of magnates.

William ruled vigorously through the Great Council. England was divided into shires, or counties, which were subdivided into hundreds. There was a sophisticated and long established shire administration. The sheriff was the king’s agent in the county, royal orders could be transmitted through the county and hundred courts, and an effective taxation collection system was in place.

England was a feudal state. All land belonged to the king. He appointed tenants-in-chief, both lay and ecclesiastical, who usually held land in return for providing a quota of fully equipped knights. The tenants-in-chief might then grant the land to sub-tenants in return for rents or services, or work the estate themselves through a bailiff. Although the Survey records 112 boroughs, agriculture was the predominant economic activity, with stock rearing of greater importance in the south-west and arable farming more important in the east and midlands. Manorialism was a pervasive influence, although it existed in most parts of England in a modified form. On the manor the peasants worked the lord’s demesne in return for protection, housing, and the use of plots of land to cultivate their own crops. They were tied to the lord and the manor and provided a resident workforce. The demesne was also worked by slaves who were fed and housed by the lord.

The Domesday Survey was commissioned on Christmas day, 1085, and it is generally thought that work on summarizing the Survey was terminated with the death of William in September 1087. The task was facilitated by the availability of Anglo-Saxon hidage (tax) lists. The counties of England were grouped into (probably) seven circuits. Each circuit was visited by a team of commissioners, bishops, lawyers and lay barons who had no material interests in the area. The commissioners were responsible for circulating a list of questions to land holders, for subjecting the responses to a review in the county court by the hundred juries, often consisting of half Englishmen and half Frenchmen, and for supervising the compilation of county and circuit returns. The circuit returns were then sent to the Exchequer in Winchester where they were summarized, edited and compiled into Great Domesday Book.

Unlike modern surveys, individual questionnaire responses were not treated confidentially but became public knowledge, being verified in the courts by landholders with local knowledge. In such circumstances, the opportunities for giving false or misleading evidence were limited.

Domesday Book consists of two volumes, Great (or Exchequer) Domesday and Little Domesday. Little Domesday is a detailed original survey circuit return of circuit VII, Essex, Norfolk and Suffolk. Great Domesday is a summarized version of the other circuit returns sent to the King’s treasury in Winchester. (It is thought that the death of William occurred before Essex and East Anglia could be included in Great Domesday.) The two volumes contain information on the net incomes or outputs (referred to as the annual values), tax assessments and resources of most manors in England in 1086, some information for 1066, and sometimes also for an intermediate year. The information was used to revise tax assessments and document the feudal structure, “who held what, and owed what, to whom.”

Taxation

The Domesday tax assessments relate to a non-feudal tax, the geld, thought to be levied annually by the end of William’s reign. The tax can be traced back to the danegeld, and, although originally a land tax, by Norman times, it was more broadly based and a significant impost on landholders.

There is an extensive literature on the Norman tax system, much of it influenced by Round (1895), who considered the assessments to be “artificial,” in the sense that they were imposed from above via the county and hundred with little or no consideration of the capacity of an individual estate to pay the tax. Round largely based his argument on an unsystematic and subjective review of the distribution of the assessments across estates, vills and the hundreds of counties.

In (1985a) and (1986, Ch. 4), Graeme Snooks and I argued that, contrary to Round’s hypothesis, the tax assessments were based on a capacity to pay principle, subject to some politically expedient tax concessions. Similar tax systems operate in most modern societies and reflect an attempt to collect revenue in a politically acceptable way. We found empirical support for the hypothesis, using statistical methods. We showed, for example, that for Essex lay estates about 65 percent of variation in the tax assessments could be attributed to variations in manorial net incomes or manorial resources, two alternative ways of measuring capacity to pay. Similar results were obtained for other counties. Capacity to pay explains from 64 to 89 percent of variation in individual estate assessment data for the counties of Buckinghamshire, Cambridgeshire, Essex and Wiltshire, and from 72 to 81 percent for aggregate data for 29 counties (see McDonald and Snooks, 1987a). The estimated tax relationships capture the main features of the tax system.
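The percentages of variation cited here are R-squared statistics from regressions of tax assessments on measures of capacity to pay. A minimal sketch of that computation, with invented income and tax figures standing in for the Domesday data, might look like:

```python
# Illustrative only: fit tax = a + b*income by ordinary least squares and
# report R^2, the share of assessment variation "explained" by income.
# The figures below are hypothetical, not actual Domesday values.
def ols_r2(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sxy / sxx                      # slope of the fitted tax relationship
    a = my - b * mx                    # intercept
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

income = [2.0, 5.0, 8.0, 3.0, 10.0, 6.0]   # manorial net income (hypothetical)
tax    = [1.0, 2.5, 4.5, 1.2, 5.5, 3.5]    # geld assessment (hypothetical)
print(round(ols_r2(income, tax), 2))
```

An R-squared near 0.65, as found for Essex lay estates, would mean about two-thirds of the spread in assessments tracks income, with the remainder left to concessions and administrative unevenness.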

Capacity to pay explains most variation in tax assessments, but some variation remains. Who and which estates were treated favorably? And what factors were associated with lenient taxation? These issues were investigated in McDonald (1998) where frontier methods were used to derive a measure of how favorable the tax assessments were for each Essex lay estate. (The frontier methods, also known as “data envelopment analysis,” use the tax and income observations to trace out an outer bound, or frontier, for the tax relationship.) Estates, tenants-in-chief and local areas (hundreds) of the county with lenient assessments were identified, and statistical methods used to discover factors associated with favorable assessments. Some significant factors were the tenant-in-chief holding the estate (assessments tended to be less beneficial for the tenants-in-chief holding a large number of estates in Essex), the hundred location (some hundreds receiving more favorable treatment than others), proximity to an urban center (estates remote from the urban centers being more favorably treated), economic size of the estate (larger estates being less favorably treated) and tenure (estates held as sub-tenancies having more lenient assessments). The results suggest a similarity with more modern tax systems, with some groups and activities receiving minor concessions and the administrative process inducing some unevenness in the assessments. Although many details of the tax system have been lost in the mists of time, careful analysis of the Survey data has enabled us to rediscover its main features.

Production

Since Victorian times historians have used Domesday Book to study the political, institutional and social structures and the geography of Domesday England. However, the early scholars tended to shy away from economic issues. They were unable to perceive that systematic economic relationships were present in the Domesday economy, and, in contrast to their view that the Normans displayed considerable ability in civil administration and military matters, economic production was regarded as poorly organized (see McDonald and Snooks, 1985a, 1985b and 1986, especially Ch. 3). One explanation why the Domesday scholars were unable to discover consistent relationships in the economy lies in the empirical method they adopted. Rather than examining the data as a whole using statistical techniques, conclusions were drawn by generalizing from a few (often atypical) cases. It is not surprising that no consistent pattern was evident when data were restricted to a few unusual observations. It would also appear that the researchers often did not have a firm grasp of economic theory (for example, seemingly being perplexed that the same annual value, that is, net output, could be generated by estates with different input mixes, see McDonald and Snooks, 1986, Ch. 3).

In McDonald and Snooks (1986), using modern economic and statistical methods, Graeme Snooks and I reanalyzed manorial production relationships. The study shows that strong relationships existed linking estate net output to inputs. We estimated manorial production functions which indicate many interesting characteristics of Domesday production: returns to scale were close to constant, oxen plough teams and meadowland were prized inputs in production but horses contributed little, and villans, bordars and slaves (the less free workers) contributed far more than freemen and sokemen (the more free) to the estate’s net output. The evidence suggested that in many ways Domesday landholders operated in a manner similar to modern entrepreneurs. Unresolved by this research was the question of how similar was the pattern of medieval and modern economic activity. In particular, how well organized was estate production?

Clearly, in an absolute sense Domesday estate production was inefficient. With modern technology, using, for example, motorized tractors, output could have been increased many-fold. A more interesting question is: Given the contemporary technology and institutions, how efficient was production?

In McDonald (1998) frontier methods were used to measure best practice, given the economic environment. We then measured how far, on average, estate production was below the best practice frontier. Providing some estates were effectively organized, so that best practice was good practice, this will be a useful measure. If many estates were run haphazardly and ineffectively, average efficiency will be low and efficiency dispersion measures large. Comparisons with average efficiency levels in similar production situations will give an indication of whether Domesday average efficiency was unusually low.
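The frontier idea can be illustrated with a single-input, single-output sketch (a simplification of the data envelopment analysis actually used in McDonald 1998, with invented figures): each unit's efficiency is its output per unit of input relative to the best observed ratio.

```python
# Illustrative sketch of a simple production frontier (one input, one output).
# Each estate's efficiency is its output-per-input ratio relative to the best
# observed ratio ("best practice"); all figures are hypothetical.
estates = {            # name: (input units, net output)
    "A": (4.0, 8.0),
    "B": (6.0, 9.0),
    "C": (5.0, 10.0),
    "D": (8.0, 10.0),
}

best_ratio = max(out / inp for inp, out in estates.values())  # frontier slope

efficiency = {name: (out / inp) / best_ratio
              for name, (inp, out) in estates.items()}

avg = sum(efficiency.values()) / len(efficiency)
for name, e in sorted(efficiency.items()):
    print(name, round(e, 2))           # 1.0 means the estate is on the frontier
print("average efficiency:", round(avg, 2))
```

With many inputs and outputs the frontier becomes a multidimensional envelope and each estate's score is found by linear programming, but the interpretation is the same: average distance below the frontier measures how far typical practice fell short of best practice.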

A large number of efficiency studies have been reported in the literature. Three case studies with characteristics similar to Domesday production are Hall’s (1975) study of agriculture after the Civil War in the American South, Hall and LeVeen’s (1978) analysis of small Californian farms and Byrnes, Färe, Grosskopf and Lovell’s (1988) study of American surface coalmines. For all three studies the individual establishment is the production unit, the economic activity is unsophisticated primary production and similar frontier methods are used to measure efficiency.

The comparison studies suggest that efficiency levels varied less across Domesday estates than they did among postbellum Southern farms and small Californian farms in the 1970s (and were very similar for Domesday estates and US surface coalmines). Certainly, the average Domesday estate efficiency level does not appear to be unusually low when compared with average efficiency levels in similar production situations.

In McDonald (1998) estate efficiency measures are also used to examine details of production on individual estates and statistical methods employed to find factors associated with efficiency. Some of these include the estate’s tenant-in-chief (some tenants-in-chief displayed more entrepreneurial flair than others), the size of the estate (larger estates, using inputs in different proportions to smaller estates, tended to be more efficient) and the kind of agriculture undertaken (estates specialized in grazing were more efficient).

Largely through the influences of feudalism and manorialism, Domesday agriculture suffered from poorly developed factor markets and considerable immobility of inputs. Although there were exceptions to the rule, as a first approximation, manorial production can be characterized in terms of estates worked by a residential labor force using the resources available on the estate.

Input productivity depends on the mix of inputs used in production, and with estates endowed with widely different resource mixes, one might expect that input productivities would vary greatly across estates. The frontier analysis generates input productivity measures (shadow prices), and these confirm this expectation — indeed on many estates some inputs made very little contribution to production. The frontier analysis also allows us to estimate the economic cost of input rigidity induced by the feudal and manorial arrangements. The calculation indicates that if inputs had been mobile among estates an increase in total net output of 40.1 percent would have been possible. This potential loss in output is considerable. The frontier analysis indicates the loss in total net output resulting from estates not being fully efficient was 51.0 percent. The loss in output due to input rigidities is smaller, but of a similar order of magnitude.

Domesday Book is indeed a rich data source. It is remarkable that so much can be discovered about the English economy almost one thousand years ago.

Further reading

Background information on Domesday England is contained in McDonald and Snooks (1986, Ch. 1 and 2; 1985a, 1985b, 1987a and 1987b) and McDonald (1998). For more comprehensive accounts of the history of the period see Brown (1984), Clanchy (1983), Loyn (1962), (1965), (1983), Stenton (1943), and Stenton (1951). Other useful references include Ballard (1906), Darby (1952), (1977), Galbraith (1961), Hollister (1965), Lennard (1959), Maitland (1897), Miller and Hatcher (1978), Postan (1966), (1972), Round (1895), (1903), the articles in Williams (1987) and references cited in McDonald and Snooks (1986). The Survey is discussed in McDonald and Snooks (1986, sec. 2.2), the references cited there, and the articles in Williams (1987). The Domesday and modern surveys are compared in McDonald and Snooks (1985c).
The reconstruction of the Domesday economy is described in McDonald and Snooks (1986). Part 1 contains information on the basic tax and production relationships and Part 2 describes the methods used to estimate the relationships. The tax and production frontier analysis and efficiency comparisons are described in McDonald (1998). The book also explains the frontier methodology. A series of articles describe features of the research to different audiences: McDonald and Snooks (1985a, 1985b, 1987a, 1987b), economic historians; McDonald (2000), economists; McDonald (1997), management scientists; McDonald (2002), accounting historians (who recognize that Domesday Book possesses many attributes of an accounting record); and McDonald and Snooks (1985c), statisticians. Others who have made important contributions to our understanding of the Domesday economy include Miller and Hatcher (1978), Harvey (1983) and the contributors to the volumes edited by Aston (1987), Holt (1987), Hallam (1988) and Britnell and Campbell (1995).

David T Flynn, University of North Dakota

Overview of Credit versus Barter and Cash

Credit was vital to the economy of colonial America and much of the individual prosperity and success in the colonies was due to credit. Networks of credit stretched across the Atlantic from Britain to the major port cities and into the interior of the country, allowing exchange to occur (Bridenbaugh, 1990, 154). Colonists made purchases by credit, cash and barter. Barter and cash were spot exchanges: goods and services were given in exchange for immediate payment. Credit, however, delayed the payment until a later date. Understanding the role of credit in the eighteenth century requires a brief discussion of all payment options as well as the nature of the repayment of credit.

Barter

Barter is an exchange of goods and services for other goods and services and can be a very difficult method of exchange due to the double coincidence of wants. For exchange to occur in a barter situation each party must have the good desired by its trading partner. Suppose John Hancock has paper supplies and wants corn while Paul Revere has silver spoons and wants paper products. Even though Revere wants the goods available from Hancock no exchange occurs because Hancock does not want the good Revere has to offer. The double coincidence of wants can make barter very costly because of time spent searching for a trading partner. This time could otherwise be used for consumption, production, leisure, or any number of other activities. The principal advantage of any form of money over barter is obvious: money removes the need for a double coincidence of wants, that is, money functions as a medium of exchange.

Money’s advantages

Money also has other functions that make it a superior method of exchange to barter including acting as the unit of account (the unit in which prices are quoted) in the economy (e.g. the dollar in the United States and the pound in England). A barter economy uses a large number of prices because every good must have a price in terms of each other good available in the economy. An economy with n different goods would have n(n-1)/2 prices in total, not an enormous burden for small values of n, but as n grows it quickly becomes unmanageable. A unit of account reduces the number of prices from the barter situation to n, or the number of goods. The colonists had a unit of account, the colonial pound (£), which removed this burden of barter.
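The price-count arithmetic above is easy to verify in a short sketch (the values of n are arbitrary, chosen only for illustration):

```python
# In a barter economy every pair of goods needs its own relative price,
# so n goods require n*(n-1)/2 prices.
def barter_prices(n):
    return n * (n - 1) // 2

# With a unit of account, each good needs only one quoted price.
def money_prices(n):
    return n

for n in (10, 100, 1000):
    print(n, barter_prices(n), money_prices(n))
# With 1000 goods, barter needs 499,500 prices; a unit of account needs 1000.
```

The gap grows quadratically, which is why even a modestly diversified economy benefits from quoting all prices in a single unit such as the colonial pound.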

Several forms of money circulated in the colonies over the course of the seventeenth and eighteenth centuries, such as specie, commodity money and paper currency. Specie is gold or silver minted into coins and is a special form of commodity money, a good that has an exchange value separate from the market value of the good. Tobacco, and later tobacco warehouse receipts, acted as a form of money in many of the colonies. Despite multiple money options some colonists complained of an inability to keep money in circulation, or at least in the hands of those wanting to use it for exchange (Baxter, 1945, 11-17; Bridenbaugh, 153).1

Credit’s advantages

When you acquire goods with credit you delay payment to a later time, be it one day or one year. A basic credit transaction today is essentially the same as in the eighteenth century, only the form is different.2 Extending credit presents risks, most notably default, or the failure of the borrower to repay the amount borrowed. Sellers also needed to worry about the total volume of credit they extended because it threatened their solvency in the case of default. Consumers benefited from credit by the ability to consume beyond current financial resources, as well as security from theft and other advantages. Sellers gained by faster sales of goods and interest charges, often hidden in a higher price for the goods.3

Uncertainty about the scope of credit

The frequency of credit versus barter and cash is not well quantified because surviving account books and transaction records generally only report cash or goods payments made after the merchant allowed credit, not spot cash or barter transactions (Baxter, 19n). Martin (1939, 150) concurs, “The entries represent transactions with those customers who did not pay at once on purchasing goods for [the seller] either made no record of immediate cash purchases, or else there were almost no such transactions.” Flynn’s (2001) study of merchant account books from Connecticut and Massachusetts also found that most purchases recorded in the account books were credit purchases (see Table 1 below).4 Scholars are forced to make general statements about credit as a standard tool in transactions in port cities and rural villages without reference to specific numbers (Perkins, 1980, 123-124).

Table 1
Percentage of Purchases by Type

                  Credit    Cash    Barter
Connecticut        98.6      1.1      0.3
Massachusetts      98.5      1.0      0.4
Combined           98.6      1.0      0.4

Source: Adapted from Table 3.2 in Flynn (2001), p. 54.

Indications of the importance of credit

In some regions, the institution of credit was so accepted that many employers, including merchants, paid their employees by providing them credit at a store on the business’s account (Martin, 94). Probate inventories evidence the frequency of credit through the large amount of accounts receivable recorded for traders and merchants in Connecticut, sometimes over £1,000 (Main, 1985, 302-303). Accounts receivable are an asset of the business representing amounts owed to the business by other parties. Almost 30 percent of the estates of Connecticut “traders” contained £100 or more of receivables as part of their estate (Main, 316). Moreover, accounts receivable averaged one-eighth of personal wealth throughout most of the colonial period, and more than one-fifth at the end (Main, 36). While there is no evidence that enables us to determine the relative frequencies of payments, the available information supports the idea that the different forms of payment co-existed.

The Different Types of Credit

There are three different types of credit to discuss: international credit, book credit, and promissory notes; each facilitated exchange and payments. Colonial importers and wholesalers relied on credit from British suppliers, while rural merchants received credit from importers and wholesalers in the port cities and, finally, consumers received credit from the retailers. A discussion starts logically with international credit from British suppliers to colonial merchants because it allowed colonial merchants to extend credit to their customers (McCusker and Menard, 1985, 80n; Martin, 1939, 19; Perkins, 1980, 24).

Overseas credit

Research on colonial growth attaches importance to several items including foreign funds, capital improvements and productivity gains. The majority of foreign funds transferred were in the form of mercantile credit (Egnal, 1998, 12-20). British merchants shipped goods to colonial merchants on credit for between six months and one year before demanding payment or charging interest (Egnal, 55; Perkins, 1994, 65; Shepherd and Walton, 1972, 131-132; Thomson, 1955, 15). Other examples show a minimum of one year’s credit given before suppliers assessed five percent interest charges (Martin, 122-123). Factors such as interest and duration determined how long colonial merchants could extend credit to their own customers and at what level of markup. Some merchants sold goods on commission, where the goods remained the property of the British merchant until sold. After the sale the colonial merchant remitted the funds, less his fee, to the British merchant.

Relationships between colonial and British merchants exhibited regional differences. Virginia merchants’ system of exchange, known as the consignment system, depended on the credit arrangements between planters and “factors” – middlemen who accepted colonial goods and acquired British or other products desired by colonists (Thomson, 28). A relationship with a British merchant was important for success in business because it provided the tobacco growers and factors access to supplies of credit sufficient to maintain business (Thomson, 211). Independent Virginia merchants, those without a British connection, ordered their supplies of goods on credit and paid with locally produced goods (Thomson, 15). Virginia and other Southern colonies could rely on credit because of their production of a staple crop desired by British merchants. New England merchants such as Thomas Hancock, uncle of the famous patriot John Hancock, could not rely on this to the same extent. New England merchants sometimes engaged in additional exchanges with other colonies and countries because they lacked goods desired by British merchants (Baxter, 46-47). Without the willingness of British merchant houses to wait for payment it would have been difficult for many colonial merchants to extend credit to their customers.

Domestic credit: book credit and promissory notes

Domestic credit was primarily of two forms, book credit and promissory notes. Merchants recorded book credit in the account books of the business; these entries were debits for an individual’s account and were set against payments, credits in the merchant’s ledger. Promissory notes detailed a debt, typically including the date of issue, the date of redemption, the amount owed, possibly the form of repayment, and an interest rate. Book credit and promissory notes were both substitutes and complements. Both represented a delay of payment and could be used to acquire goods, but book accounts were also a large source of personal notes. Merchants who felt payment was either too slow in coming or the risks of default too high could insist the buyer provide a note. The note was a more secure form of credit: it could be exchanged and, despite the likely loss on the note’s face value if the debtor was in financial trouble, would not represent a continuing worry for the merchant (Martin, 158-159).5

The settlement of debt obligations incorporated many forms of payment. Figure 1 details the activity between Samuell Maxey and Jonathan Parker, a Massachusetts merchant. Included are several purchases of earthenware by Maxey and others and several payments, including some in cash and goods as well as from third parties. Baxter (1945, 21) describes similar experiences when he says,

…the accounts over and over again tell of the creditor’s weary efforts to get his dues by accepting a tardy and halting series of odds and ends; and (as prices were often soaring, especially in 1740-64) the longer a debtor could put off payment, the fewer goods might he need to hand over to square a liability for so much money.

Repayment means and examples

The “odds and ends” included goods and commodity money as well as other cash, bills of exchange, and third party settlements (Baxter, 17-32). Merchants accepted goods such as pork, beef, fish and grains for their store goods (Martin, 94). Flynn (2001) identifies several categories of payment, including goods, cash, notes and others, as shown in Table 2.

Table 2

Percentage of Payments by Category

Colony     Cash   Goods   Note   Reckoning   Third-party note   Bond   Labor
Conn.      27.5   45.9    3.3    7.5          6.9               0.0    8.9
Mass.      24.2   47.6    2.8    7.5         13.7               0.2    2.3
Combined   25.6   46.9    3.0    7.5         10.9               0.1    5.0

Source: Adapted from Table 3.4 in Flynn (2001), p. 54.

Cash, goods and notes require no further explanation, but Table 2 shows other items used in payment as well. Colonists used labor to repay their tabs, working in their creditor’s field or lending the labor services of a child or a yoke of oxen. Some accounts also list “reckoning,” which occurred typically between two merchants or traders who made purchases on credit from each other. Before settling their accounts it was convenient to determine their net position with each other. After making the determination, the merchant in debt sometimes made a payment that brought the balance to zero; at other times the merchants proceeded without a payment but with a better sense of the account position. Third parties also made payments that employed goods, money and credit. When the merchant did not want the particular goods offered in payment he could hope to pass them on, ideally to his own creditors. Such exchange satisfied both the merchant’s debts and the consumer’s (Baxter, 24-25). Figure 1 above and Figure 2 below illustrate this.

The accounts of Parker and his customer, Mr. Clark, show another purchase of earthenware and three payments. The purchase is clearly on credit, as Parker recorded the first payment occurring over two months after the purchase. Clark provided two cash payments, and then a third person, Mr. Blanchard, settled Clark’s account in full with rum. What do these third party payments represent? To answer this we need to step back from the specifics of the account and generalize.

Figures 1 and 2 show credits from third parties in cash and goods. If we think in terms of three-way trade the answer becomes obvious. In Figure 1, where a Mr. Suttin pays £5.00 cash to Parker on the account of Samuell Maxey, Suttin is settling a debt he owes Maxey (whether in part or in full we do not know). To settle the debt he owes Parker, Maxey directs those who owe him money to pay Parker instead, and thus reduces his own debt. Figure 2 displays the same type of activity, except that Blanchard pays with rum. Though not depicted here, private debts between customers could also be settled on the merchant’s books. Rather than offering payment in cash or goods, private parties could swap debt on the merchant’s account book, ordering a transfer from one account to another. The merchant’s final approval of the exchange implied something about the added risk from a third party exchange: the new person did not pose a greater default risk in the creditor’s opinion, otherwise, we would suspect, he would have refused the exchange.6

Complexity of the credit system

The payment system in the colonies was complex and dynamic, with creditors allowing debtors to settle accounts in several fashions. Goods and money satisfied outstanding debts, while other credit obligations deferred or transferred them. Debtors and creditors employed the numerous forms of payment in regular and third party transactions, making merchants’ account books a clearinghouse for debts. Although the lack of technology leaves casual observers thinking payments at this time were primitive, such was clearly not the case. With only pen and paper, eighteenth-century merchants developed a sophisticated payment system, of which book credit and personal notes were an important part.

The Duration of Credit

The length of time credit remains outstanding, its duration, is an important characteristic. Duration represents the amount of time a creditor awaited payment; anecdotal and statistical evidence provide some insight into the duration of book credit and promissory notes.

The calculation of the duration of book credit, or any similar type of instrument, is relatively straightforward when the merchant recorded dates in his account book conscientiously. Consider the following example.

Figure 3

Accounts of David Frothingham, Customer, and Jonathan Parker, Massachusetts Merchant

The exchanges between Frothingham and Jonathan Parker show one purchase and two payments. Frothingham provides a partial payment for the earthenware at the time of purchase, in cash. However, £4.75 of debt remains outstanding, and is not repaid until April of 1749. It is possible to calculate a range of values for the final settlement of this account, using the first day of April to give a lower bound estimate and the last day to give an upper bound estimate. Counting the number of days shows that it took at least 182 days and at most 211 days to settle the debt. Alternatively the debt lasted between 6 and 7 months.
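The bound calculation can be reproduced in a few lines. This is a sketch only: the purchase date used below (October 1, 1748) is a hypothetical chosen to be consistent with the day counts quoted in the text, and calendar-reform subtleties of the 1740s are ignored.

```python
from datetime import date

# Hypothetical purchase date consistent with the counts in the text;
# the account records only that settlement came "in April of 1749".
purchase = date(1748, 10, 1)

# Lower bound: settlement on the first day of April 1749.
lower = (date(1749, 4, 1) - purchase).days    # 182 days
# Upper bound: settlement on the last day of April 1749.
upper = (date(1749, 4, 30) - purchase).days   # 211 days

print(lower, upper)   # 182 211
```

Dividing by 30, following the text’s convention, gives roughly 6.1 to 7.0 months, matching the “between 6 and 7 months” figure.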

Not all merchants were meticulous record keepers and sometimes they failed to record a particular date with the rest of an account book entry.8 Figure 4 illustrates this problem well and also provides an example of multiple purchases along with multiple payments. The first purchase of earthenware is repaid with one “cash” payment sixty-three days (2.1 months) later.9 Computation of the term of the second loan is more complicated. The last two payments satisfy the purchase amount, so Adams repaid the loan completely. Unfortunately, Parker left out the date for the second payment. The second payment occurred on or after July 22, 1748, so this date is the lower end of the interval. The minimum time between purchase and second payment is zero days, but computation of a maximum time, or upper bound, is not possible due to the lack of information.10

With a sufficient number of debts some generalization is possible. If we interpret the data as the length of a debt’s life we can use demographic methods, in particular the life table.11 For a sample of Connecticut and Massachusetts account books the average duration looks like the following.12

Table 3

Expected Duration for Connecticut Debts, Lower and Upper Bound

Size of debt in £   e0 lower bound   Median lower bound   e0 upper bound   Median upper bound
                    (months)         (interval)           (months)         (interval)
All Values          14.79            6-12                 15.87            6-12
0.0-0.25            15.22            6-12                 15.99            6-12
0.25-0.50           14.28            6-12                 15.51            6-12
0.50-0.75           15.24            6-12                 18.01            6-12
0.75-1.00           14.25            6-12                 15.94            6-12
1.00-10.00          13.95            6-12                 15.07            6-12
10.00+               7.95            0-6                  10.73            6-12

Table 4

Expected Duration for Massachusetts Debts, Lower and Upper Bound

Size of debt in £   e0 lower bound   Median lower bound   e0 upper bound   Median upper bound
                    (months)         (interval)           (months)         (interval)
All Values          13.22            6-12                 14.87            6-12
0.0-0.25            14.74            6-12                 17.55            12-18
0.25-0.50           12.08            6-12                 12.80            6-12
0.50-0.75           11.73            6-12                 13.08            6-12
0.75-1.00           11.01            6-12                 12.43            6-12
1.00-10.00          13.08            6-12                 13.88            6-12
10.00+              14.28            12-18                17.02            12-18

Source: Adapted from Tables 4.1 and 4.2 in Flynn (2001), p. 80.

For all debts in the Connecticut sample, the expected length of time a debt remained outstanding from its inception is estimated at between 14.79 and 15.87 months. For Massachusetts the range is somewhat shorter, from 13.22 to 14.87 months. Tables 3 and 4 also break the data into categories based on the value of the credit transaction. An important question is whether this represents long-term or short-term debt. There is no standard yardstick for comparison in this case. The best comparison is likely the international credit granted to colonial merchants, who needed to repay these amounts and had to sell the goods to make remittances. The estimates of that credit duration, listed earlier, center around one year, which means that colonial merchants in New England needed to repay their British suppliers before they could expect to receive full payment from their customers. From the colonial merchants’ perspective book credit was certainly long-term.
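The life-table mechanics behind estimates like those in Tables 3 and 4 can be sketched with a toy cohort. The numbers below are invented for illustration (not Flynn’s sample): each debt is treated as “alive” until settled, settlements are grouped into month intervals, and the expected duration e0 is the total debt-months lived divided by the starting cohort, assuming settlements fall at mid-interval on average.

```python
# Toy abridged life table for debt duration (hypothetical data).
intervals = [(0, 6), (6, 12), (12, 18), (18, 24)]   # month bins
settled   = [30, 40, 20, 10]                        # debts repaid in each bin

alive = sum(settled)        # 100 debts open at month 0
debt_months = 0.0
for (start, end), d in zip(intervals, settled):
    width = end - start
    # debts repaid during the interval are assumed alive for half of it
    debt_months += (alive - d / 2) * width
    alive -= d

e0 = debt_months / sum(settled)   # expected duration at inception, in months
print(e0)   # 9.6
```

Interval censoring in the account books (a missing day or month in an entry) is what produces the separate lower- and upper-bound columns in the tables: the same calculation is run once with each debt’s shortest possible life and once with its longest.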

Other estimates of duration of book credit

Other estimates of book credit’s duration vary. Consumers paying their credit purchases in kind took as little time as a few months or as long as several years (Martin, 153). Some accounting records show book credit remaining unsettled for nearly thirty years (Baxter, 161). Thomas Hancock often noted expected payment dates, such as “to pay in 6 months” along with a purchase, though frequently this was not enough time for the buyer. Thomas blamed the law, which allowed twelve months for people to make repayments, complaining to his suppliers that he often provided credit to country residents of “one two & more years” (Baxter, 192). Surely such a situation is the exception and not the rule, though it does serve to remind us that many of these arrangements were open, lacking definite endpoints. Some merchants allowed accounts to last as long as two years before examining the position of the account, allowing one year’s book credit without charge, and thereafter assessing interest (Martin, 157).

Duration of promissory notes

The duration of promissory notes is also important. Priest (1999) examines a form of duration for these credit instruments, estimating the time between a debtor’s signing of the note and the creditor’s filing of suit to collect payment. Of course this only measures the duration for notes that went into default and required legal recourse. Typically, a suit originated some 6 to 9 months after default (Priest, 2417-18). Results for the period 1724 to 1750 show that 14.5% of cases occurred within 6 months of the initial contraction date, the execution of the debt. Merchants brought suit in more than 60% of cases between 6 months and 3 years from execution: 21.4% from 6 to 12 months, 27.4% from 1 to 2 years, and 14.1% from 2 to 3 years. Finally, more than 20% of cases occurred more than 3 years from the execution of the debt. The median interval between execution and suit was 17.5 months (Priest, 2436, Table 3).
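Priest’s shares, as quoted, can be tallied directly. This quick check uses only the percentages given in the text and confirms the “more than 60%” and “more than 20%” statements.

```python
# Percent of debt suits by interval between execution and filing (Priest).
within_6_months = 14.5
six_to_twelve   = 21.4
one_to_two_yrs  = 27.4
two_to_three    = 14.1

between_6mo_and_3yr = six_to_twelve + one_to_two_yrs + two_to_three
over_3yr = 100.0 - within_6_months - between_6mo_and_3yr

print(round(between_6mo_and_3yr, 1))   # 62.9
print(round(over_3yr, 1))              # 22.6
```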

The duration of promissory notes provides an important complement to estimates of book credit’s term. A median estimate of 17.5 months makes promissory notes, more than likely, a long-term credit instrument when balanced against the one-year credit term given colonial importers. The estimates for book credit range from three months to several years in the literature, and from 13 to 16 months in Flynn’s (2001) study. Duration results show that merchants waited significant amounts of time for payment, raising the issue of the time value of money and interest rates.

The Interest Practices of Merchants

In some cases credit was outstanding for a long period of time, yet the accounts make no mention of any interest charges, as in Figures 1 through 4. Such an omission is difficult to reconcile with the fairly sophisticated business practices of the merchants of the day. Accounting research and manuals from the time demonstrate a clear understanding of the time value of money, and the business community understood the concept of compound interest. Account books allowed merchants to charge higher and variable prices for goods sold on book credit (Martin, 94). While in some cases interest charges entered the account book as an explicit entry, in many others interest was an added or implicit charge contained in the good’s price.

Advertisements from the time make it clear that merchants charged less for goods purchased with cash, and accounts paid promptly received a discount on the price:

One general pricing policy seems to have been that goods for cash were sold at a lower price than when they were charged. Cabel[sic] Bull advertised beaver hats at 27/ cash and 30/ country produce in hand. Daniel Butler of Northampton offered dyes, and “a few Cwt. of Redwood and Logwood cheaper than ever for ready money.” Many other advertisements carried allusions to the practice but gave no definite data. A daybook of the Ely store contained this entry for October 21, 1757: “William Jones, Dr to 6 yds Towcloth at 1/6—if paid in a month at 1/4. (Martin, 1939, 144-145)
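The Ely daybook entry quoted above is concrete enough to put a number on the credit premium. Converting both prices to pence (1 shilling = 12 pence):

```python
# "6 yds Towcloth at 1/6 — if paid in a month at 1/4"
# 1/6 = 1s 6d per yard on book credit; 1/4 = 1s 4d if paid within a month.
credit_pence = 1 * 12 + 6    # 18d
prompt_pence = 1 * 12 + 4    # 16d

premium = credit_pence / prompt_pence - 1
print(round(premium * 100, 1))   # 12.5 (percent surcharge for slow payment)
```

A 12.5 percent surcharge for payment delayed beyond a month is far above the lawful annual rates of the period, consistent with credit prices bundling default risk and collection costs along with interest.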

Other advertisements also evidence a price difference, offering cash prices for certain grains merchants desired. Connecticut merchants likely offered good prices for products they thought would sell well as they sought remittances for their British creditors. Hartford merchants charged interest rates ranging from four and one-half to six and one-half percent in the 1750s and 1760s (Martin, 158), though Flynn (2001) arrives at different rates from a different sample of New England account books. Many promissory notes in South Carolina specified interest, though not an exact rate, usually just the term “lawful interest” (Woods, 364).

Estimates of interest rates

Simple regression analysis can help determine whether interest was implicit in the price of goods sold on credit, though numerous technical issues, such as borrower characteristics, market conditions and the quality of the good, preclude a full discussion here.13 In general, there seems to be a positive correlation, with annual interest rates falling between 3.75% and 7%, consistent with the results from interest entries made in account books. There is some tendency for the price of a good to increase with the time waited for repayment, though many other technical matters need resolution.
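A minimal version of such a regression might look like the following. The prices and terms are invented for illustration (not Flynn’s data), and the controls mentioned above, borrower characteristics, market conditions and good quality, are omitted.

```python
import numpy as np

# Hypothetical unit prices (pence) by months the balance was outstanding,
# constructed with a 0.1d-per-month markup over a 20d cash price.
months = np.array([0.0, 3.0, 6.0, 9.0, 12.0, 18.0])
price  = np.array([20.0, 20.3, 20.6, 20.9, 21.2, 21.8])

# OLS fit: price = intercept + slope * months
slope, intercept = np.polyfit(months, price, 1)

# Implied annual rate, expressing the monthly markup relative to the
# cash (intercept) price.
annual_rate = 12 * slope / intercept
print(round(annual_rate * 100, 2))   # 6.0
```

A positive, statistically significant slope is the signature of implicit interest; with real account-book data the fit would of course be far noisier than this constructed example.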

Most annual interest rates in Flynn’s (2001) study, explicit and implicit, fall in the range of 4 to 6.5 percent making them similar to those Martin found in her examination of accounts and roughly consistent with the Massachusetts lawful rate of 6 percent at the time, though some entries assess interest as high as 10 percent (Martin, 158; Rothenberg, 1992, 124). Despite this, the explicit rates are insufficient on their own to form a conclusion about the interest rate charged on book credit; there are too few entries, and many involve promissory notes or third parties, factors expected to alter the interest rate. Other factors such as borrower characteristics likely changed the assessed rate of interest too, with more prominent and wealthy individuals charged lower rates, either due to their status and a perceived lower risk, or possibly due to longer merchant-buyer relationships. Most account books do not contain information sufficient to judge the effects of these characteristics.

Merchants gained from credit use by charging higher prices; credit required a premium over cash sales, so the merchant collected interest while minimizing the amount of payment media needed (Martin, 94). Interest was distinct from the normal markups for insurance, freight, wharfage, and the like, which were often significant additions to the overall price; it represented an attempt to account for risk and the time value of money (Baxter, 192; Thomson, 239).14

Conclusions

Credit was significant as a form of payment in colonial America. Direct comparisons of the number of credit purchases versus barter or cash are not possible, but an examination of accounting records demonstrates credit’s widespread use. Credit was present in all forms of trade including international trade between England and her colonies. The domestic forms of credit were relatively long-term instruments that allowed individuals to consume beyond current means. In addition, book credit allowed colonists to economize on cash and other means of payment through transfers of credit, “reckoning,” and other means such as paying workers with store credit. Merchants also understood the time value of money, entering interest charges explicitly in the account books and implicitly as part of the price. The use of credit, the duration of credit instruments, and the methods of incorporating interest show credit as an important method of exchange and the economy of colonial America to be very complex and sophisticated.

References

Baxter, W.T. The House of Hancock: Business in Boston, 1724-1775. Cambridge: Harvard University Press, 1945.

Rothenberg, Winifred. From Market-Places to a Market Economy: The Transformation of Rural Massachusetts, 1750-1850. Chicago: University of Chicago Press, 1992.

Shepherd, James F. and Gary Walton. Shipping, Maritime Trade, and the Economic Development of Colonial North America. Cambridge: Cambridge University Press, 1972.

Thomson, Robert Polk. The Merchant in Virginia, 1700-1775. Ph.D. dissertation, University of Wisconsin, 1955.

Further Reading:

For a good introduction to credit’s importance across different professions, merchant practices and the development of business practices over time I suggest:

Bailyn, Bernard. The New England Merchants in the Seventeenth Century. Cambridge: Harvard University Press, 1979.

Schlesinger, Arthur. The Colonial Merchants and the American Revolution: 1763-1776. New York: Facsimile Library Inc., 1939.

For an introduction to issues relating to money supply, the unit of account in the economy, and price and exchange rate data I recommend:

Brock, Leslie V. The Currency of the American Colonies, 1700-1764: A Study in Colonial Finance and Imperial Relations. New York: Arno Press, 1975.

McCusker, John J. Money and Exchange in Europe and America, 1600-1775: A Handbook. Chapel Hill: University of North Carolina Press, 1978.

McCusker, John J. How Much Is That in Real Money? A Historical Commodity Price Index for Use as a Deflator of Money Values in the Economy of the United States, Second Edition. Worcester, MA: American Antiquarian Society, 2001.

1 Some authors note a small amount of cash purchases as well as small numbers of cash payments for debts as evidence of a lack of money (Bridenbaugh, 153; Baxter, 19n).

2 Presently, credit cards are a common form of payment. While such technology did not exist in the past, the merchant’s account book provided a means of recording credit purchases.

3 Price (1980, pp.16-17) provides an excellent summary of the advantages and risks of credit to different types of consumers and to merchants in both Britain and the colonies.

4 Please note that this table consists of transactions mostly between colonial retail merchants and colonial consumers in New England. Flynn (2001) uses account books that collectively span from approximately 1704 to 1770.

5 In some cases the extension of book credit came with a requirement to provide a note as well. When the solvency of the debtor came into question, the creditor could sell the note and pass the risk of default on to another.

6 I offer a detailed example of such an exchange going sour for the merchant below.

7 “No date” is Flynn’s entry to show that a date is not recorded in the account book.

8 It seems that this frequently occurs at the end of a list of entries, particularly when the credit fully satisfies an outstanding purchase as in Figure 4.

9 To calculate months, divide days by 30. The term “cash” is placed in quotation marks as it is woefully nondescript. Some merchants and researchers using account books group several different items under the heading cash.

10 Students interested in historical research of this type should be prepared to encounter many situations of missing information. There are ways to deal with this censoring problem, but a technical discussion is not appropriate here.

11 Colin Newell’s Methods and Models in Demography (Guilford Press, 1988) is an excellent introduction for these techniques.

12 Note that either merchants recorded amounts in the lawful money standard or Flynn (2001) converted amounts into this standard for these purposes.

13 The premise behind the regression is quite simple: we look for a correlation between the amount of time an amount was outstanding and the per unit price of the good. If credit purchases contained implicit interest charges there would be a positive relationship. Note that this test implies forward looking merchants, that is, merchants factored the perceived or agreed upon time to repayment into the price of the good.

14 In 1783, a Boston correspondent wrote Wadsworth that dry goods in Boston were selling at a twenty to twenty-five percent ‘advance’ from the ‘real Sterling Cost by Wholesale.’ The ‘advances’ occasionally mentioned in John Ely’s Day Book were far higher, seventy to seventy-five per cent on dry goods. Dry goods sold well at one hundred and fifty per cent ‘advance’ in New York in 1750… (Martin, 136).

In the 1720s a typical advance on piece goods in Boston was eighty per cent, seventy-five with cash (Martin, 136n). It should be noted that others find open account balances were commonly kept interest free (Rothenberg, 1992, 123).

B. Zorina Khan, Bowdoin College

Introduction

Copyright is a form of intellectual property that provides legal protection against unauthorized copying of the producer’s original expression in products such as art, music, books, articles, and software. Economists have paid relatively little scholarly attention to copyrights, although recent debates about piracy and “the digital dilemma” (free use of digital property) have prompted closer attention to theoretical and historical issues. Like other forms of intellectual property, copyright is directed to the protection of cultural creations that are nonrival and nonexcludable in nature. It is generally proposed that, in the absence of private or public forms of exclusion, prices will tend to be driven down to the low or zero marginal cost and the original producer will be unable to recover the initial investment.

Part of the debate about copyright exists because it is still not clear whether state enforcement is necessary to enable owners to gain returns, or whether the producers of copyrightable products respond significantly to financial incentives. Producers of these public goods might still be able to appropriate returns without copyright laws or in the face of widespread infringement, through such strategies as encryption, cartelization, the provision of complementary products, private monitoring and enforcement, market segmentation, network externalities, first mover effects and product differentiation. Patronage, taxation, subsidies, or public provision, might also comprise alternatives to copyright protection. In some instances “authors” (broadly defined) might be more concerned about nonfinancial rewards such as enhanced reputations or more extensive diffusion.

During the past three centuries great controversy has always been associated with the grant of property rights to authors, ranging from the notion that cultural creativity should be rewarded with perpetual rights, through the complete rejection of any intellectual property rights at all for copyrightable commodities. However, historically, the primary emphasis has been on the provision of copyright protection through the formal legal system. Europeans have generally tended to adopt the philosophical position that authorship embodies rights of personhood or moral rights that should be accorded strong protections. The American approach to copyright has been more utilitarian: policies were based on a comparison of costs and benefits, and the primary emphasis of early copyright policies was on the advancement of public welfare. However, the harmonization of international laws has created a melding of these two approaches. The tendency at present is toward stronger enforcement of copyrights, prompted by the lobbying of publishers and the globalization of culture and commerce. Technological change has always exerted an exogenous force for change in copyright laws, and modern innovations in particular provoke questions about the extent to which copyright systems can respond effectively to such challenges.

Copyright in Europe

Copyright in France

In the early years of printing, books and other written matter became part of the public domain when they were published. Like patents, the grant of book privileges originated in the Republic of Venice in the fifteenth century, a practice which was soon prevalent in a number of other European countries. Donatus Bossius, a Milan author, petitioned the duke in 1492 for an exclusive privilege for his book, and successfully argued that he would be unjustly deprived of the benefits from his efforts if others were able to freely copy his work. He was given the privilege for a term of ten years. However, authorship was not required for the grant of a privilege, and printers and publishers obtained monopolies over existing books as well as new works. Since privileges were granted on a case by case basis, they varied in geographical scope, duration, and breadth of coverage, as well as in terms of the attendant penalties for their violation. Grantors included religious orders and authorities, universities, political figures, and the representatives of the Crown.

The French privilege system was introduced in 1498 and was well-developed by the end of the sixteenth century. Privileges were granted under the auspices of the monarch, generally for a brief period of two to three years, although the term could be as much as ten years. Protection was granted to new books or translations, maps, type designs, engravings and artwork. Petitioners paid formal fees and informal gratuities to the officials concerned. Since applications could only be sealed if the King were present, petitions had to be carefully timed to take advantage of his route or his return from trips and campaigns. It became somewhat more convenient when the courts of appeal such as the Parlement de Paris began to issue grants that were privileges in all but name, although this could lead to conflicting rights if another authority had already allocated the monopoly elsewhere. The courts sometimes imposed limits on the rights conferred, in the form of stipulations about the prices that could be charged. Privileges were property that could be assigned or licensed to another party, and their infringement was punished by a fine and at times confiscation of all the output of “pirates.”

After 1566, the Edict of Moulins required that all new books had to be approved and licensed by the Crown. Favored parties were able to get renewals of their monopolies that also allowed them to lay claim to works that were already in the public domain. By the late eighteenth century an extensive administrative procedure was in place that was designed to restrict the number of presses and engage in surveillance and censorship of the publishing industry. Manuscripts first had to be read by a censor, and only after a permit was requested and granted could the book be printed, although the permit could later be revoked if complaints were lodged by sufficiently influential individuals. Decrees in 1777 established that authors who did not alienate their property were entitled to exclusive rights in perpetuity. Since few authors had the will or resources to publish and distribute books, their privileges were likely to be sold outright to professional publishers. However, the law made a distinction in the rights accorded to publishers, because if the right was sold the privilege was only accorded a limited duration of at least ten years, the exact term to be determined in accordance with the value of the work, and once the publisher’s term expired, the work passed into the public domain. The fee for a privilege was thirty six livres. Approvals to print a work, or a “permission simple” which did not entail exclusive rights could also be obtained after payment of a substantial fee. Between 1700 and 1789, a total of 2,586 petitions for exclusive privileges were filed, and about two thirds were granted. The result was a system that resulted in “odious monopolies,” higher prices and greater scarcity, large transfers to officials of the Crown and their allies, and pervasive censorship. It likewise disadvantaged smaller book producers, provincial publishers, and the academic and broader community.

The French Revolutionary decrees of 1791 and 1793 replaced the idea of privilege with that of uniform statutory claims to literary property, based on the principle that “the most sacred, the most unassailable and the most personal of possessions is the fruit of a writer’s thought.” The subject matter of copyrights covered books, dramatic productions and the output of the “beaux arts” including designs and sculpture. Authors were required to deposit two copies of their books with the Bibliothèque Nationale or risk losing their copyright. Some observers felt that copyrights in France were the least protected of all property rights, since they were enforced with a care to protecting the public domain and social welfare. Although France is associated with the author’s rights approach to copyright and proclamations of the “droit d’auteur,” these ideas evolved slowly and hesitatingly, mainly in order to meet the self-interest of the various members of the book trade. During the ancien régime, the rhetoric of authors’ rights had been promoted by French owners of book privileges as a way of deflecting criticism of monopoly grants and of protecting their profits, and by their critics as a means of attacking the same monopolies and profits. This language was retained in the statutes after the Revolution, so the changes in interpretation and enforcement may not have been universally evident.

By the middle of the nineteenth century, French jurisprudence and philosophy tended to explicate copyrights in terms of rights of personality, but the idea of the moral claim of authors to property rights was not incorporated in the law until early in the twentieth century. The droit d’auteur first appeared in a law of April 1910. In 1920 visual artists were granted a “droit de suite” or a claim to a portion of the revenues from resale of their works. Subsequent evolution of French copyright laws led to the recognition of the right of disclosure, the right of retraction, the right of attribution, and the right of integrity. These moral rights are (at least in theory) perpetual, inalienable, and thus can be bequeathed to the heirs of the author or artist, regardless of whether or not the work was sold to someone else. The self-interested rhetoric of the owners of monopoly privileges now fully emerged as the keystone of the “French system of literary property” that would shape international copyright laws in the twenty-first century.

Copyright in England

England similarly experienced a period during which privileges were granted, such as a seven-year grant from the Chancellor of Oxford University for a 1518 work. In 1557, the Worshipful Company of Stationers, a publishers’ guild, was founded on the authority of a royal charter and controlled the book trade for the next one hundred and fifty years. This company created and controlled the right of its constituent members to make copies, so in effect their “copy right” was a private property right that existed in perpetuity, independently of state or statutory rights. Enforcement and regulation were carried out by the corporation itself through its Court of Assistants. The Stationers’ Company maintained a register of books, issued licenses, and sanctioned individuals who violated its regulations. Thus, in both England and France, copyright law began as a monopoly grant to benefit and regulate the printers’ guilds, and as a form of surveillance and censorship over public opinion on behalf of the Crown.

The English system of privileges was replaced in 1710 by a copyright statute (the “Statute of Anne,” or “An Act for the Encouragement of Learning, by Vesting the Copies of Printed Books in the Authors or Purchasers of Such Copies, During the Times Therein Mentioned,” 1709-10, 8 Anne, ch. 19). The statute was not directed toward the authors of books and their rights. Rather, its intent was to restrain the publishing industry and destroy its monopoly power. According to the law, the grant of copyright was available to anyone, not just to the Stationers. Instead of a perpetual right, the term was limited to fourteen years, with a right of renewal, after which the work would enter the public domain. The statute also permitted the importation of books in foreign languages.

Subsequent litigation and judicial interpretation added a new and fundamentally different dimension to copyright. In order to protect their perpetual copyright, publishers tried to promote the idea that copyright was based on the natural rights of authors or creative individuals and that, as the agent of the author, those rights devolved to the publisher. If indeed copyrights derived from these inherent principles, they represented property that existed independently of statutory provisions and could be protected under common law. The booksellers engaged in a series of strategic lawsuits that culminated in their defeat in the landmark case, Donaldson v. Beckett [98 Eng. Rep. 257 (1774)]. The court ruled that authors had a common law right in their unpublished works, but on publication that right was extinguished by the statute, whose provisions determined the nature and scope of any copyright claims. This transition from publisher’s rights to statutory author’s rights implied that copyright had been transmuted from a straightforward license to protect monopoly profits into an expanding property right whose boundaries would henceforth increase at the expense of the public domain.

Between 1735 and 1875, fourteen Acts of Parliament amended the copyright legislation. Copyrights extended to sheet music, maps, charts, books, sculptures, paintings, photographs, dramatic works and songs sung in a dramatic fashion, and lectures outside of educational institutions. Copyright owners had no remedies at law unless they complied with a number of stipulations, which included registration, the payment of fees, and the delivery of free copies of every edition to the British Museum (delinquents were fined), as well as complimentary copies for four libraries, including the Bodleian and Trinity College. The ubiquitous Stationers’ Company administered registration, and the registrar personally benefited from the fees: 5 shillings when the book was registered, an equal amount for each assignment and each copy of an entry, and one shilling for each entry searched. Foreigners could obtain copyrights only if they were present in a part of the British Empire at the time of publication. The book had to be published in the United Kingdom, and prior publication in a foreign country – even in a British colony – was an obstacle to copyright protection.

The term of the copyright in books was the longer of 42 years from publication or the lifetime of the author plus seven years, and after the death of the author a compulsory license could be issued to ensure that works of sufficient public benefit would be published. The “work for hire” doctrine was in force for books, reviews, newspapers, magazines and essays unless a distinct contractual clause specified that the copyright was to accrue to the author. Unauthorized use of a publication was also permitted for purposes of “fair use.” Only the copyright holder and his agents were allowed to import the protected works into Britain.

The British Commission that reported on the state of the copyright system in 1878 felt that the laws were “obscure, arbitrary and piecemeal,” a situation compounded by the confused state of the common law. The numerous uncoordinated laws that were simultaneously in force led to conflicts and unintended defects in the system. The report discussed, but did not recommend, an alternative to the grant of copyrights in the form of a royalty system, under which “any person would be entitled to copy or republish the work on paying or securing to the owner a remuneration, taking the form of royalty or definite sum prescribed by law.” The main benefit would accrue to the public in the form of early access to cheap editions, whereas the main cost would fall on the publishers, whose risk and return would be negatively affected.

The Commission noted that the implications for the colonies were “anomalous and unsatisfactory.” The publishers in England practiced price discrimination, modifying the initial high prices for copyrighted material through discounts given to reading clubs, circulating libraries and the like, benefits which were not available in the colonies. In 1846 the Colonial Office acknowledged “the injurious effects produced upon our more distant colonists” and passed the Foreign Reprints Act in the following year. This allowed colonies that adopted the terms of British copyright legislation to import cheap reprints of British copyrighted material subject to a tariff of 12.5 percent, the proceeds of which were to be remitted to the copyright owners. However, enforcement of the tariff seems to have been less than vigorous, since between 1866 and 1876 only £1155 was received from the 19 colonies that took advantage of the legislation (£1084 of it from Canada, which benefited significantly from the American reprint trade). The Canadians argued that it was difficult to monitor imports, so it would be more effective to allow them to publish the reprints themselves and collect taxes for the benefit of the copyright owners. This proposal was rejected, but under the Canadian Copyright Act of 1875 British copyright owners could obtain Canadian copyrights for Canadian editions that were sold at much lower prices than in Britain or even in the United States.

The Commission made two recommendations. First, the bigger colonies with domestic publishing facilities should be allowed to reprint copyrighted material on payment of a license to be set by law. Second, the benefits to the smaller colonies of access to British literature should take precedence over lobbies to repeal the Foreign Reprints Act, which should be better enforced rather than removed entirely. Some had argued that the public interest required that Britain should allow the importation of cheap colonial reprints since the high prices of books were “altogether prohibitory to the great mass of the reading public” but the Commission felt that this should only be adopted with the consent of the copyright owner. They also devoted a great deal of attention to what was termed “The American Question” but took the “highest public ground” and recommended against retaliatory policies.

Copyright in the United States

Colonial Copyright

In the period before the Declaration of Independence, the individual American states recognized and promoted patenting activity, but copyright protection was not considered to be of equal importance, for a number of reasons. First, in a democracy the claims of the public and the wish to foster freedom of expression were paramount. Second, to a new colony, pragmatic concerns were likely of greater importance than the arts, and the more substantial literary works were imported. Markets were sufficiently narrow that an individual could saturate the market with a first-run printing, and most local publishers produced ephemera such as newspapers, almanacs, and bills. Third, it was unclear that copyright protection was needed as an incentive for creativity, especially since a significant fraction of output was devoted to works such as medical treatises and religious tracts whose authors wished simply to maximize the number of readers, rather than the amount of income they received.

In 1783, Connecticut became the first state to approve an “Act for the encouragement of literature and genius” because “it is perfectly agreeable to the principles of natural equity and justice, that every author should be secured in receiving the profits that may arise from the sale of his works, and such security may encourage men of learning and genius to publish their writings; which may do honor to their country, and service to mankind.” Although this preamble might seem to strongly favor author’s rights, the statute also specified that books were to be offered at reasonable prices and in sufficient quantities, or else a compulsory license would issue.

Federal Copyright Grants

Despite their common source in the intellectual property clause of the U.S. Constitution, copyright policies provided a marked contrast to the patent system. According to Wheaton v. Peters, 33 U.S. 591, 684 (1834): “It has been argued at the bar, that as the promotion of the progress of science and the useful arts is here united in the same clause in the constitution, the rights of the authors and inventors were considered as standing on the same footing; but this, I think, is a non sequitur, for when congress came to execute this power by legislation, the subjects are kept distinct, and very different provisions are made respecting them.”

The earliest federal statute to protect the product of authors was approved on May 31, 1790, “for the encouragement of learning, by securing the copies of maps, charts, and books to the authors and proprietors of such copies, during the times therein mentioned.” John Barry obtained the first federal copyright when he registered his spelling book in the District Court of Pennsylvania, and early grants reflected the same utilitarian character. Policy makers felt that copyright protection would serve to increase the flow of learning and information, and by encouraging publication would contribute to democratic principles of free speech. The diffusion of knowledge would also ensure broad-based access to the benefits of social and economic development. The copyright act required authors and proprietors to deposit a copy of the title of their work in the office of the district court in the area where they lived, for a nominal fee of sixty cents. Registration secured the right to print, publish and sell maps, charts and books for a term of fourteen years, with the possibility of an extension for another like term. Amendments to the original act extended protection to other works including musical compositions, plays and performances, engravings and photographs. Legislators refused to grant perpetual terms, but the length of protection was extended in the general revisions of the laws in 1831 and 1909.

In the case of patents, the rights of inventors, whether domestic or foreign, were widely viewed as coincident with public welfare. In stark contrast, policymakers showed from the very beginning an acute sensitivity to trade-offs between the rights of authors (or publishers) and social welfare. The protections provided to authors under copyrights were as a result much more limited than those provided by the laws based on moral rights that were applied in many European countries. Of relevance here are stipulations regarding first sale, work for hire, and fair use. Under a moral rights-based system, an artist or his heirs can claim remedies if subsequent owners alter or distort the work in a way that allegedly injures the artist’s honor or reputation. According to the first sale doctrine, the copyright holder lost all rights after the work was sold. In the American system, if the copyright holder’s welfare were enhanced by nonmonetary concerns, these individualized concerns could be addressed and enforced through contract law, rather than through a generic federal statutory clause that would affect all property holders. Similarly, “work for hire” doctrines also repudiated the right of personality, in favor of facilitating market transactions. For example, in 1895 Thomas Donaldson filed a complaint that Carroll D. Wright’s editing of Donaldson’s report for the Census Bureau was “damaging and injurious to the plaintiff, and to his reputation” as a scholar. The court rejected his claim and ruled that as a paid employee he had no rights in the bulletin; to rule otherwise would create problems in situations where employees were hired to prepare data and statistics.

This difficult quest for balance between private and public good was most evident in the copyright doctrine of “fair use” that (unlike with patents) allowed unauthorized access to copyrighted works under certain conditions. Joseph Story ruled in [Folsom v. Marsh, 9 F. Cas. 342 (1841)]: “we must often, in deciding questions of this sort, look to the nature and objects of the selections made, the quantity and value of the materials used, and the degree in which the use may prejudice the sale, or diminish the profits, or supersede the objects, of the original work.” One of the striking features of the fair use doctrine is the extent to which property rights were defined in terms of market valuations, or the impact on sales and profits, as opposed to a clear holding of the exclusivity of property. Fair use doctrine thus illustrates the extent to which the early policy makers weighed the costs and benefits of private property rights against the rights of the public and the provisions for a democratic society. Had copyrights been as strictly construed as patents, the effect would have been to reduce scholarship, prohibit public access for noncommercial purposes, increase transactions costs for potential users, and inhibit the learning which the statutes were meant to promote.

Nevertheless, like other forms of intellectual property, the copyright system evolved to encompass improvements in technology and changes in the marketplace. Technological changes in nineteenth-century printing included the use of stereotyping which lowered the costs of reprints, improvements in paper making machinery, and the advent of steam powered printing presses. Graphic design also benefited from innovations, most notably the development of lithography and photography. The number of new products also expanded significantly, encompassing recorded music and moving pictures by the end of the nineteenth century; and commercial television, video recordings, audiotapes, and digital music in the twentieth century.

The subject matter, scope and duration of copyrights expanded over the course of the nineteenth century to include musical compositions, plays, engravings, sculpture, and photographs. By 1910 the original copyright holder had also been granted derivative rights, such as the rights to translate literary works into other languages, to authorize performances, and to adapt musical works, among others. Congress also lengthened the term of copyright several times, although by 1890 the terms of copyright protection in Greece and the United States were the most abbreviated in the world. New technologies stimulated change by creating new subjects for copyright protection, and by lowering the costs of infringement of copyrighted works. In Edison v. Lubin, 122 F. 240 (1903), the lower court rejected Edison’s copyright of moving pictures under the statutory category of photographs. This decision was overturned by the appellate court: “[Congress] must have recognized there would be change and advance in making photographs, just as there has been in making books, printing chromos, and other subjects of copyright protection.” Copyright enforcement was largely the concern of commercial interests, and not of the creative individual. The fraction of copyright plaintiffs who were authors (broadly defined) was initially quite low, and fell continuously during the nineteenth century. By 1900-1909, only 8.6 percent of all plaintiffs in copyright cases were the creators of the item that was the subject of the litigation. Instead, by the same period, the majority of parties bringing cases were publishers and other assignees of copyrights.

In 1909 Congress revised the copyright law and composers were given the right to make the first mechanical reproductions of their music. However, after the first recording, the statute permitted a compulsory license to issue for copyrighted musical compositions: that is to say, anyone could subsequently make their own recording of the composition on payment of a fee that was set by the statute at two cents per recording. In effect, the property right was transformed into a liability rule. The next major legislative change in 1976 similarly allowed compulsory licenses to issue for works that are broadcast on cable television. The prevalence of compulsory licenses for copyrighted material is worth noting for a number of reasons: they underline some of the statutory differences between patents and copyrights in the United States; they reflect economic reasons for such distinctions; and they are also the result of political compromises among the various interest groups that are affected.

Allied Rights

The debate about the scope of patents and copyrights often underestimates or ignores the importance of allied rights that are available through other forms of the law such as contract and unfair competition. A noticeable feature of the case law is the willingness of the judiciary in the nineteenth century to extend protection to noncopyrighted works under alternative doctrines in the common law. More than 10 percent of copyright cases dealt with issues of unfair competition, and 7.7 percent with contracts; a further 12 percent encompassed issues of right to privacy, trade secrets, and misappropriation. For instance, in Keene v. Wheatley et al., 14 F. Cas. 180 (1860), the plaintiff did not have a statutory copyright in the play that was infringed. However, she was awarded damages on the basis of her proprietary common law right in an unpublished work, and because the defendants had taken advantage of a breach of confidence by one of her former employees. Similarly, the courts offered protection against misappropriation of information, such as occurred when the defendants in Chamber of Commerce of Minneapolis v. Wells et al., 111 N.W. 157 (1907) surreptitiously obtained stock market information by peering in windows, eavesdropping, and spying.

Several other examples relate to the more traditional copyright subject of the book trade. E. P. Dutton & Company published a series of Christmas books which another publisher photographed and offered as a series with similar appearance and style but at lower prices. Dutton claimed to have been injured by a loss of profits and a loss of reputation as a maker of fine books. The firm did not have copyrights in the series, but it essentially claimed a right in the “look and feel” of the books. The court agreed: “the decisive fact is that the defendants are unfairly and fraudulently attempting to trade upon the reputation which plaintiff has built up for its books. The right to injunctive relief in such a case is too firmly established to require the citation of authorities.” In a case that will resonate with academics, a surgery professor at the University of Pennsylvania was held to have a common law property right in the lectures he presented, and a student could not publish them without his permission. Titles could not be copyrighted, but they were protected as trademarks and under unfair competition doctrines. In this way, in numerous lawsuits, G. C. Merriam & Co., the original publishers of Webster’s Dictionary, restrained the actions of competitors who published the dictionary once the copyrights had expired.

International Copyrights in the United States

The U.S. was long a net importer of literary and artistic works, especially from England, which implied that recognition of foreign copyrights would have led to a net deficit in international royalty payments. The Copyright Act recognized this when it specified that “nothing in this act shall be construed to extend to prohibit the importation or vending, reprinting or publishing within the United States, of any map, chart, book or books … by any person not a citizen of the United States.” Thus, the statutes explicitly authorized Americans to take free advantage of the cultural output of other countries. As a result, it was alleged that American publishers “indiscriminately reprinted books by foreign authors without even the pretence of acknowledgement.” The tendency to reprint foreign works was encouraged by the existence of tariffs on imported books that ranged as high as 25 percent.

The United States stood out in contrast to countries such as France, where Louis Napoleon’s Decree of 1852 prohibited counterfeiting of both foreign and domestic works. Other countries which were affected by American piracy retaliated by refusing to recognize American copyrights. Despite the lobbying of numerous authors and celebrities on both sides of the Atlantic, the American copyright statutes did not allow for copyright protection of foreign works for fully one century. As a result, American publishers and producers freely pirated foreign literature, art, and drama.

Effects of Copyright Piracy

What were the effects of piracy? First, did the American industry suffer from cheaper foreign books being dumped on the domestic market? This does not seem to have been the case. After controlling for the type of work, the cost of the work, and other variables, the prices of American books were lower than the prices of foreign books. American book prices may have been lower to reflect lower perceived quality or other factors that caused imperfect substitutability between foreign and local products. As might be expected, prices were not exogenously and arbitrarily fixed, but varied in accordance with a publisher’s estimation of market factors such as the degree of competition and the responsiveness of demand to its determinants. The reading public appears to have gained from the lack of copyright, which increased access to the superior products of the more developed markets in Europe, and in the long run this likely improved both the demand for and the supply of domestic science and literature.

Second, according to observers, professional authorship in the United States was discouraged because it was difficult to compete with established authors such as Scott, Dickens and Tennyson. Whether native authors were deterred by foreign competition would depend on the extent to which foreign works prevailed in the American market. Early in American history the majority of books were reprints of foreign titles. However, nonfiction titles written by foreigners were less likely to be substitutable for nonfiction written by Americans; consequently, the supply of nonfiction soon tended to be provided by native authors. From an early period grammars, readers, and juvenile texts were also written by Americans. Geology, geography, history and similar works would have to be adapted or completely rewritten to be appropriate for an American market, which reduced their attractiveness as reprints. Thus, publishers of schoolbooks, medical volumes and other nonfiction did not feel that the reforms of 1891 were relevant to their undertakings. Academic and religious books were less likely to be written for monetary returns, and their authors probably benefited from the wider circulation that the lack of international copyright encouraged. However, the writers of these works declined in importance relative to writers of fiction, a category which grew from 6.4 percent before 1830 to 26.4 percent by the 1870s.

On the other hand, foreign authors dominated the field of fiction for much of the century. One study estimates that about fifty percent of all fiction best sellers in the antebellum period were pirated from foreign works. In 1895 American authors accounted for two of the top ten best sellers, but by 1910 nine of the top ten were written by Americans. This fall over time in the fraction of foreign authorship may have been due to a natural evolutionary process, as the development of the market for domestic literature encouraged specialization. The growth in the number of fiction authors was associated with an increase in the number of books per author over the same period. Improvements in transportation and the increase in the academic population probably played a large role in enabling individuals who lived outside the major publishing centers to become writers despite the distance. As the market expanded, a larger fraction of writers could become professionals.

Although the lack of copyright protection may not have discouraged authors, this does not imply that intellectual property policy in this dimension had no costs. It is likely that the lack of foreign copyrights led to some misallocation of efforts or resources, such as in attempting to circumvent the rules. Authors changed their residence temporarily when books were about to be published in order to qualify for copyright. Others obtained copyrights by arranging to co-author with a foreign citizen. T. H. Huxley adopted this strategy, arranging to co-author with “a young Yankee friend … Otherwise the thing would be pillaged at once.” An American publisher suggested that Kipling should find “a hack writer, whose name would be of use simply on account of its carrying the copyright.” Harriet Beecher Stowe proposed a partnership with Elizabeth Gaskell, so they could “secure copyright mutually in our respective countries and divide the profits.”

It is widely acknowledged that copyrights in books tended to be the concern of publishers rather than of authors (although the two are naturally not independent of each other). As a result of the lack of legal copyrights in foreign works, publishers raced to be first on the market with “new” pirated books, and the industry experienced several decades of intense, if not quite “ruinous,” competition. These were problems that publishers in England had faced before, in the market for uncopyrighted books such as those of Shakespeare and Fielding. Their solution was to collude in the form of strictly regulated cartels or “printing congers.” The congers created divisible property in books that they traded, such as a one-hundred-and-sixtieth share in Johnson’s Dictionary that sold for £23 in 1805. Cooperation resulted in risk sharing and a greater ability to cover expenses. The unstable races in the United States similarly settled down during the 1840s into collusive standards that were termed “trade custom” or “courtesy of the trade.”

The industry achieved relative stability because the dominant firms cooperated in establishing synthetic property rights in foreign-authored books. American publishers made payments (termed “copyrights”) to foreign authors to secure early sheets, and other firms recognized their exclusive property in the “authorized reprint”. Advance payments to foreign authors not only served to ensure the coincidence of publishers’ and authors’ interests – they were also recognized by “reputable” publishers as “copyrights.” These exclusive rights were tradable, and enforced by threats of predatory pricing and retaliation. Such practices suggest that publishers were able to simulate the legal grant through private means.

However, private rights naturally did not confer property rights that could be enforced at law. The case of Sheldon v. Houghton, 21 F. Cas. 1239 (1865), illustrates that these rights were considered to be “very valuable, and is often made the subject of contracts, sales, and transfers, among booksellers and publishers.” The very fact that a firm would file a plea for the court to protect its claim indicates how vested a right it had become. The plaintiff argued that “such custom is a reasonable one, and tends to prevent injurious competition in business, and to the investment of capital in publishing enterprises that are of advantage to the reading public.” The courts rejected this claim, since synthetic rights differed from copyrights in the degree of security offered by the enforcement power of the courts. Nevertheless, these title-specific rights of exclusion decreased uncertainty, enabled publishers to recoup their fixed costs, and avoided the wasteful duplication of resources that would otherwise have occurred.

It was not until 1891 that the Chace Act granted copyright protection to selected foreign residents. Thus, after a century of lobbying by interested parties on both sides of the Atlantic, based on reasons that ranged from the economic to the moral, copyright laws only changed when the United States became more competitive in the international market for literary and artistic works. However, the act also included significant concessions to printers’ unions and printing establishments in the form of “manufacturing clauses.” First, a book had to be published in the U.S. before or at the same time as the publication date in its country of origin. Second, the work had to be printed in the United States, or printed from type set in the United States or from plates made from type set in the United States. Copyright protection still depended on conformity with stipulations such as formal registration of the work. These clauses resulted in the U.S. failure to qualify for admission to the international Berne Convention until 1988, more than one hundred years after the first Convention.

After the copyright reforms of 1891, both English and American authors were disappointed to find that the change in the law did not lead to significant gains. Foreign authors realized that they might even have benefited from the lack of copyright protection in the United States. Despite the cartelization of publishing, competition for these synthetic copyrights ensured that foreign authors were able to obtain the payments that American firms made to secure the right to be first on the market. It can also be argued that foreign authors were able to reap higher total returns from the expansion of the market through piracy. The lack of copyright protection may have functioned as a form of price discrimination, where the product was sold at a higher price in the developed country, and at a lower or zero price in the poorer country. Returns under such circumstances may have been higher for goods with demand externalities or network effects, such as “bestsellers,” where consumer valuation of the book increased with the size of the market. For example, Charles Dickens, Anthony Trollope, and other foreign writers were able to gain considerable income from complementary lecture tours in the extensive United States market.

Harmonization of Copyright Laws

In view of the strong protection accorded to inventors under the U.S. patent system, to foreign observers its copyright policies appeared all the more reprehensible. The United States, the most liberal in its policies towards patentees, had led the movement for harmonization of patent laws. In marked contrast, throughout the history of the U.S. system, its copyright grants were in general more abridged than those of almost all other countries in the world. The term of copyright grants to American citizens was among the shortest in the world, the country applied the broadest interpretation of fair use doctrines, and the validity of the copyright depended on strict compliance with statutory requirements. U.S. failure to recognize the rights of foreign authors was also unique among the major industrial nations. Throughout the nineteenth century proposals to reform the law and to acknowledge foreign copyrights were repeatedly brought before Congress and rejected. Even the bill that finally recognized international copyrights almost failed, passed only at the last possible moment, and required longstanding exemptions in favor of workers and printing enterprises.

Just as the United States set the pattern in patent matters, France’s influence was evident in the subsequent evolution of international copyright laws. Other countries had long recognized the rights of foreign authors in national laws and bilateral treaties, but France stood out in its favorable treatment of domestic and foreign copyrights as “the foremost of all nations in the protection it accords to literary property.” This was especially true of its concessions to foreign authors and artists. For instance, France allowed copyrights to foreigners conditioned on manufacturing clauses in 1810, and granted foreign and domestic authors equal rights in 1852. In the following decade France entered into almost two dozen bilateral treaties, prompting a movement towards multilateral negotiations, such as the Congress on Literary and Artistic Property in 1858. The International Literary and Artistic Association, which the French novelist Victor Hugo helped to establish, conceived of and organized the Convention which first met in Berne in 1883.

The Berne Convention included a number of countries that wished to establish an “International Union for the Protection of Literary and Artistic Works.” The preamble declared their intent to “protect effectively, and in as uniform a manner as possible, the rights of authors over their literary and artistic works.” The actual Articles were more modest in scope, requiring national treatment of authors belonging to the Union and minimum protection for translation and public performance rights. The Convention authorized the establishment of a physical office in Switzerland, whose official language would be French. The rules were revised in 1908 to extend the duration of copyright and to include modern technologies. Perhaps the most significant aspect of the convention was not its specific provisions, but the underlying property rights philosophy which was decidedly from the natural rights school. Berne abolished compliance with formalities as a prerequisite for copyright protection since the creative act itself was regarded as the source of the property right. This measure had far-reaching consequences, because it implied that copyright was now the default, whereas additions to the public domain would have to be achieved through affirmative actions and by means of specific limited exemptions. In 1928 the Berne Convention followed the French precedent and acknowledged the moral rights of authors and artists.

Unlike its leadership in patent conventions, the United States declined an invitation to the pivotal copyright conference in Berne in 1883; it attended but refused to sign the 1886 agreement of the Berne Convention. Instead, the United States pursued international copyright policies in the context of the weaker Universal Copyright Convention (UCC), which was adopted in 1952 and formalized in 1955 as a complementary agreement to the Berne Convention. The UCC membership included many developing countries that did not wish to comply with the Berne Convention because they viewed its provisions as overly favorable to the developed world. The United States was among the last wave of entrants into the Berne Convention when it finally joined in 1988. In order to do so, it removed prerequisites for copyright protection such as registration and lengthened the term of copyrights. However, it still has not introduced federal legislation in accordance with Article 6bis, which declares the moral rights of authors “independently of the author’s economic rights, and even after the transfer of the said rights.” Similarly, individual countries continue to differ in the extent to which multilateral provisions govern domestic legislation and practices.

The quest for harmonization of intellectual property laws resulted in a “race to the top,” directed by the efforts and self-interest of the countries with the strongest property rights. The movement to harmonize patents was driven by American efforts to ensure that its extraordinary patenting activity was remunerated beyond as well as within its borders. At the same time, the United States ignored international conventions to unify copyright legislation. Nevertheless, the harmonization of copyright laws proceeded, promoted by France and other civil law regimes which urged stronger protection for authors based on their “natural rights,” even as they infringed on the rights of foreign inventors. The net result was that international pressure was applied to developing countries in the twentieth century to establish both strong patents and strong copyrights, although no individual developed country had adhered to both standards simultaneously during its own early growth phase. This occurred even though theoretical models did not offer persuasive support for intellectual property harmonization, and indeed suggested that uniform policies might be detrimental even to some developed countries and to overall global welfare.

Conclusion

The past three centuries stand out in terms of the diversity across nations in intellectual property institutions, but the nineteenth century saw the origins of the movement towards the “harmonization” of laws that at present dominates global debates. Among the now-developed countries, the United States stood out for its conviction that broad access to intellectual property rules and standards was key to achieving economic development. Europeans were less concerned about enhancing mass literacy and public education, and viewed copyright owners as inherently meritorious and deserving of strong protection. European copyright regimes thus evolved in the direction of author’s rights, while the United States lagged behind the rest of the world in terms of both domestic and foreign copyright protection.

By design, American statutes differentiated between patents and copyrights in ways that seemed warranted if the objective was to increase social welfare. The patent system early on discriminated between nonresident and domestic inventors, but within a few decades changed to protect the right of any inventor who filed for an American patent regardless of nationality. The copyright statutes, in contrast, openly encouraged piracy of foreign goods on an astonishing scale for one hundred years, in defiance of the recriminations and pressures exerted by other countries. The American patent system required an initial search and examination that ensured the patentee was the “first and true” creator of the invention in the world, whereas copyrights were granted through mere registration. Patents were based on the assumption of novelty and held invalid if this assumption was violated, whereas essentially similar but independent creation was copyrightable. Copyright holders were granted the right to derivative works, whereas the patent holder was not. Unauthorized use of patented inventions was prohibited, whereas “fair use” of copyrighted material was permissible if certain conditions were met. Patented inventions involved greater initial investments, effort, and novelty than copyrighted products and tended to be more responsive to material incentives, whereas in many cases cultural goods would still have been produced, or their output only slightly reduced, in the absence of such incentives. Fair use was not allowed in the case of patents because the disincentive effect was likely to be higher, while the costs of negotiation between the patentee and the narrower market of potential users would generally be lower. If copyrights were as strongly enforced as patents, the beneficiaries would be publishers and a small literary elite, at the cost of social investments in learning and education.

The United States created a utilitarian market-based model of intellectual property grants which created incentives for invention, but always with the primary objective of increasing social welfare and protecting the public domain. The checks and balances of interest group lobbies, the legislature and the judiciary worked effectively as long as each institution was relatively well-matched in terms of size and influence. However, a number of legal and economic scholars are increasingly concerned that the political influence of corporate interests, the vast number of uncoordinated users over whom the social costs are spread, and international harmonization of laws have upset these counterchecks, leading to over-enforcement at both the private and public levels.

International harmonization with European doctrines introduced significant distortions in the fundamental principles of American copyright and its democratic provisions. One of the most significant of these changes was also one of the least debated: compliance with the precepts of the Berne Convention accorded automatic copyright protection to all creations on their fixation in tangible form. This rule reversed the relationship between copyright and the public domain that the U.S. Constitution stipulated. According to original U.S. copyright doctrines, the public domain was the default, and copyright merely comprised a limited exemption to the public domain; after the alignment with Berne, copyright became the default, and the rights of the public and of the public domain now merely comprise a limited exception to the primacy of copyright. The pervasive uncertainty that characterizes the intellectual property arena today leads risk-averse individuals and educational institutions to err on the side of abandoning their right to free access rather than invite potential challenges and costly litigation. A number of commentators are equally concerned about other dimensions of the globalization of intellectual property rights, such as the movement to emulate European grants of property rights in databases, which has the potential to inhibit diffusion and learning.

Copyright law and policy has always altered, and been altered by, social, economic and technological changes, in the United States and elsewhere. However, the one constant across the centuries is that copyright protection turns on crucial political questions to a far greater extent than on its economic implications.

Additional Readings

Economic History

B. Zorina Khan. The Democratization of Invention: Patents and Copyrights in American Economic Development, 1790-1920. New York: Cambridge University Press, 2005.

Law and Economics

Besen, Stanley, and L. Raskind. “An Introduction to the Law and Economics of Intellectual Property.” Journal of Economic Perspectives 5 (1991): 3-27.

Economic history lost one of its best and brightest with Ken Sokoloff’s death in May 2007. To celebrate and commemorate his contributions to economics, Dora Costa and Naomi Lamoreaux collected an impressive and diverse group of essays contributed by Ken’s friends, colleagues, coauthors, and classmates. Ken’s interests were wide-ranging: he wrote on early industrialization and heights and health, but his signal contributions concerned invention and innovation, as well as the complex connections between geography, institutions and long-run economic growth. Fittingly, the essays are equally wide-ranging.

The first article is an essay Ken was working on with Stan Engerman and advances the initial conditions-geography-institutions approach explored in their earlier research. The central argument is that differences in initial conditions between North America and Central and South America set those regions on markedly different social, economic and political trajectories. With its relative shortage of indigenous labor, early settlers recognized that North America would prosper only through European settlement and they adopted institutions in which new arrivals were welcomed (eventually) into the polity and might, with good fortune and hard work, rise in society. Blessed with an abundance of indigenous workers, the earliest settlers in South and Central America adopted institutions that discouraged European immigration by restricting economic and political privilege. Moreover, the nature of staple crop production pushed the returns to unskilled labor so low that few Europeans came. The argument, briefly stated, is that early inequality begat later inequality through endogenously arising institutions that favored the few, the elite.

Sokoloff and Engerman’s research raises fundamental questions: Are institutions exogenously determined by idiosyncratic events, such as the arrival of British rather than Spanish colonizers, as the legal origins approach posits?[1] Are institutions, once established, persistent, as the colonial origins approach contends?[2] Or are institutions endogenous to geographies as societies struggle with how best to deal with the challenges of environments, technologies, and factor endowments? Sokoloff and Engerman are clearly in the endogenous institutions camp.

It is fitting, then, that the next two articles take on the exogeneity/endogeneity debate from alternative perspectives. Camilo Garcia-Jimeno and James A. Robinson explore the long-run implications of Frederick Jackson Turner’s thesis that the American frontier shaped its egalitarian representative democracy. Garcia-Jimeno and Robinson recognize that the U.S. was not the only New World country with a frontier and offer the “conditional frontier hypothesis,” which posits that the consequences of the frontier are conditional on the existing political equilibrium when settlement of the frontier commences. They consider 21 New World countries and, from a series of regressions, conclude that if political institutions were bad at the outset (which they define as 1850) the existence of a frontier may have made them worse. The oligarchs divvied up the frontier among themselves, which further entrenched their economic and political power. Exogenous institutions rule.

Or do they? Stephen Haber next explores banking and finance in three countries (the U.S., Mexico and Brazil) but starts from a very different, very Sokoloff-ian (if I may) perspective. For Haber, as for Sokoloff, the task facing the economic historian interested in institutions involves tracing the many and complex ways in which economic and political power becomes embedded in institutions, how those institutions influence the formation of competing coalitions, and how competition between them either entrenches or alters the original institutions. Pursuing these connections is, Haber (p. 90) argues, “a task better suited to historical narratives than to econometric hypothesis testing.” What connects banking in these three countries is that the elite used their existing power to rent seek: to elicit government sanction of limited entry and privileged monopoly. What separated the three countries was that rent-seeking efforts largely failed in the U.S. If Jackson’s war on the Second Bank was emblematic of anything, it was that U.S. populists had little tolerance for government-sanctioned economic privilege. Haber doesn’t, and I doubt that Ken would, attribute the Jacksonian attitude to an accident of history. It was organically, indelibly American.

Joel Mokyr summarizes Ken’s approach to his other great intellectual passion: invention and innovation. Innovation was the consequence of purposive, rational behavior. Inventors, at least at some level, were motivated and directed by costs and benefits. Ken also recognized that inventive activity was sensitive to the institutions that generated markets that defined the rewards for innovation. Zorina Khan takes these issues head on in her analysis of patents versus prizes. At the risk of gross oversimplification, the English and the French preferred prizes for inventions, believing that what motivated inventive genius was the esteem of one’s peers. Americans proceeded under the pragmatic and republican belief that profits motivated and markets would “allow society to better realize its potential” (p. 207). Prizes were subject to momentary whims, were idiosyncratic, difficult to predict, and therefore less useful in pushing out the frontiers of useful knowledge. Markets elicited more innovation, at least as markets were organized in America.

The second article in the volume to which Ken directly contributed is coauthored with Naomi Lamoreaux and Dhanoos Sutthiphisal. They, too, explore the connection between markets and inventions in the “new economy” of the 1920s. They argue that the rapid expansion of equity markets afforded many small enterprises on the technological frontier access to finance that was unavailable a generation earlier. Big firms dominated patenting in the Northeast. In what became the Rust Belt, small, entrepreneurial firms with new products or processes issued equities or attracted the venture capital necessary for them to bring their products to market. Markets influence innovation in all kinds of direct and indirect ways.

The constraints of a book review, unfortunately, preclude a discussion of the many other very good essays in the volume, which venture so far afield that they are not readily condensed. They are all worth reading; I was particularly fascinated by Dan Bogart and John Majewski’s article comparing the British and American transportation revolutions, and touched by Manuel Trajtenberg’s reflections on Ken as scholar and friend.

On a personal note, I am a beneficiary of Ken’s gentle but firm guidance. It was inadvertently revealed to me that Ken was one of the anonymous reviewers of my State Banking in Early America (2003). While the manuscript was well outside his research interests, he offered several insightful comments, one of which forced me to think more deeply about a central idea. My book is better for Ken’s advice. Many of the chapters included in this volume are undoubtedly better for Ken’s prodding, pushing and provocation. He is missed.

Howard Bodenhorn is currently studying early corporate governance in the United States.

Copyright (c) 2013 by EH.Net. All rights reserved. This work may be copied for non-profit educational uses if proper credit is given to the author and the list. For other permission, please contact the EH.Net Administrator (administrator@eh.net). Published by EH.Net (August 2013). All EH.Net reviews are archived at http://www.eh.net/BookReview

Reviewed for EH.Net by Isaac Nakhimovsky, Faculty of History, University of Cambridge.

This prize-winning book presents an important new account of the emergence of political economy as a discipline in the eighteenth century. It mounts a strong challenge to histories of economic thought that limit their attention to highly abstract treatises on political economy, particularly those that seek to trace the emergence of principled arguments for free trade. Reinert shows that intensifying competition between states formed the backdrop for eighteenth-century reflection on political economy. The book draws attention to a wide-ranging practical literature that focused on the imperatives of economic survival in a hostile international environment. The claim Reinert develops is that the process by which such practical economic knowledge began to be formalized and institutionalized cannot be understood without appreciating the significance of interstate rivalry. Reinert shows that this process can be illuminated by tracking eighteenth-century translations of publications on political economy. As he vividly shows, such translations were much more than mere renderings in foreign languages: understood as creative vehicles for the transmission, evaluation, and appropriation of economic knowledge, these translations themselves represent an important facet of international competition. They were among the means by which France sought to catch up to Britain, and by which lesser powers across Europe sought to avoid becoming the victims of economic imperialism. The formalization and institutionalization of political economy, Reinert’s book suggests, took place in this politically charged and mediated fashion.

An expansive introductory chapter illustrates the potential of this approach to the history of political economy through a statistical analysis of translations in the magnificent Kress Collection (covering economic literature before 1850) at the Baker Library of Harvard Business School, where Reinert is an Assistant Professor. The heart of the book, however, is an elegantly framed narrative revealing how, over the course of the eighteenth century, John Cary’s 1695 Essay on the State of England was thoroughly transformed by its French, Italian, and German translators. The 1745 English edition of Cary’s slim essay became a two-volume French treatise published in 1755 by Georges-Marie Butel-Dumont, an associate of Vincent de Gournay; this in turn became a three-volume work by the famed Neapolitan professor Antonio Genovesi in 1757-58; and finally, elements of all three of these editions were recombined in truncated form in a German edition of 1788, prepared at the instigation of a former Danish official by a Saxon political economist named Christian August Wichmann. As Reinert shows to great effect, Cary’s essay was vastly expanded and systematized over the course of its “grand tour” to Paris, Naples and Leipzig, and in the process what began as a Bristol merchant’s “primer of economic imperialism” was transformed into a “general guidebook for escaping de facto colonial dependencies” (p. 203).

Cary’s essay, long cited as a classic statement of mercantilism, is revealed by Reinert to be a practical guide for how England could secure its urban manufacturing base as the engine of its economic development. Cary’s aim was to explain how England could remain at war with France without destroying the foundations of its wealth. Resisting the threat of Catholic absolutism, according to Cary, required England to have an economy powered by export-oriented manufacturing, even if this meant brutally suppressing the industrialization of potential low-wage competitors in Ireland as well as on the continent. Cary’s debate over Ireland with William Molyneux had already prompted him to attempt giving his essay a more scientific cast. However, this process began in earnest when Butel-Dumont injected his revised and updated version of Cary’s essay into 1750s French debates about how to respond to England’s economic success. In the spirit of Gournay’s project to build up public knowledge of political economy in France, Butel-Dumont provided the essay with an impressive new bibliographic apparatus, but the most ambitious conceptual transformation of Cary’s essay was undertaken in Naples. From Genovesi’s perspective, Cary’s Anglican republican vision of an “honest hive” was indistinguishable from Mandevillian atheism. It pointed to a historical model of cyclical decline and fall, which would condemn Naples to becoming an English colonial dependency. For Genovesi, avoiding this result required reworking Cary’s theoretical starting point and equipping his conjectural history of society with a stronger providential purpose. The final reformatting of Cary’s essay by Wichmann was intended for a Cameralist audience, whom Reinert compellingly places in a new light, describing Cameralism as fundamentally concerned with the problem of how to respond to the rise of maritime empires despite not having access to their superior imperial technologies.

Reinert draws two far-reaching conclusions from this impressively erudite investigation into the fate of Cary’s essay. The first has to do with the importance of manufacturing for England’s rise, which Reinert develops into a full-throated attack on “the equality assumption” of neo-classical economics. Against James Buchanan’s assumption of “constant returns to scale of production over all ranges of output” (p. 82), Reinert claims that manufacturing industry enjoyed increasing returns to scale that could not be matched by agriculture. He traces this insight back to a pioneering early-seventeenth-century treatise by Antonio Serra that Genovesi later studied and that Reinert has recently translated into English. Cary’s eighteenth-century translators had no doubt that England’s great success was the product of an “exceedingly conscious policy” favoring industrialization (p. 202). To forego such a policy was to submit to a fate of colonial dependency; perhaps the most provocative suggestion in Reinert’s book is that the rise of free-trade doctrines and their subsequent canonization can be attributed to English efforts to suppress foreign competition. Reinert’s fundamental point is that a history of doctrines of free trade yields at best naïve dogmas and may even serve as a mask for economic imperialism. A more realistic political economy for our own times, in his view, requires a more realistic historical vision.

At the same time, Reinert draws out a second major insight from his history of Cary’s essay: all of Cary’s translators strove to purge his essay of what they regarded as his toxic variety of patriotism. Cary had equated English prosperity with the defeat and impoverishment of its rivals. His translators sought to replace this “jealousy of trade” with a more cosmopolitan vision that allowed for the possibility of “emulation” or “noble competition,” but without resorting to an agrarian utopianism. In eighteenth-century terms, they were for Colbertism without Machiavellism (p. 176): they entertained a vision of how a world of competitively industrializing states could be stabilized. In addition to mounting a powerful realist critique of free-trade dogma, then, Reinert also advances recent reinterpretations of Enlightenment optimism in terms of a search for non-lethal forms of competition, and opens up a fascinating new prospect on the development of the discipline of political economy. His account goes a long way toward explaining why it was that the transformation of English practical economic experience into a systematic theory of political economy initially took place not in England itself, but in Ireland, Scotland, and continental Europe.

Copyright (c) 2013 by EH.Net. All rights reserved. This work may be copied for non-profit educational uses if proper credit is given to the author and the list. For other permission, please contact the EH.Net Administrator (administrator@eh.net). Published by EH.Net (July 2013). All EH.Net reviews are archived at http://www.eh.net/BookReview

Reviewed for EH.Net by John Munro, Department of Economics, University of Toronto.

Embracing a most impressive range of research, cogently organized, penetrating in its analysis of all aspects of the medieval English economy related to money, and elegant in its prose, Bolton’s Money in the Medieval English Economy: 973-1489 is one of the most important books published in English medieval economic history during the past two decades. Indeed, I do not know of any other comparable and equally comprehensive study of English medieval monetary history. The book is cast into two unequal parts. Part I (pp. 3-86) is theoretical, beginning with the Fisher Identity and the relationships between money, population, and prices in the medieval economy, followed by uniformly excellent chapters on the roles of money in a developing market economy: in terms of bullion supplies, coinage, and credit instruments. The longer Part II (pp. 87-309) analyses the changes in coinage and other forms of money, and then in more detail the changing roles of money in the actual economy, sector by sector, over three distinct eras: 973-1158, 1158-1351, and 1351-1489. This section thus begins with the monetary reforms of Edgar of Mercia, first to be crowned and remain king of England, in 973; and it ends with Henry VII’s issue of the first gold sovereign coin, representing the value of one pound sterling, in October 1489 (the shilling came later). A far more logical end-point would have been the onset of Henry VIII’s Great Debasement in 1542-44, as in Martin Allen’s recent, magisterial Mints and Money in Medieval England (2012), to which Bolton acknowledges his great indebtedness. Manchester University Press’s severe space limitations evidently prevented Bolton from extending his study beyond 1489, and also from including his 25-page bibliography, now available only online (URL on p. 310). Beyond the general objectives just outlined, Bolton’s book has two other major goals.

The first is achieved with great success: to prove, in chapters 6 and 7, that England did not acquire a fully-developed money economy until the era from 1158 to 1351, i.e., up to the onset of the Black Death. In his fully justifiable view, a money economy essentially meant a well-functioning market economy, one that required not only a considerable expansion in the circulating coinage but also rapid population growth and the concomitant development of towns and villages with urban and regional fairs, the establishment of effective forms of royal taxation, and the development of the requisite commercial, financial and legal institutions, especially those needed for various forms of credit; and for the latter, the spread of both literacy and numeracy. He demonstrates that, while the population at least doubled and may have tripled between 1086 (Domesday Book) and 1300 (from 2.0/2.5 million to 5.0/6.0 million), the money supply expanded by 27 to 40 fold: from £25,000/£37,500 to more than £1.0 million, most of that from the 1220s, with the major increases in coinage attributed to the Central European silver-mining booms of ca. 1160 to ca. 1230. He cites Mayhew’s estimates (2004) that per capita GDP rose from £0.18 in 1086 to £0.78 in 1300 (and to £1.52 in 1470: Table 9.2, p. 295). Depending on sources, methodology, and population estimates, he contends that per capita supplies of silver coin rose from 3.2d/6.0d in 1042-1066 to 65.5d/101.3d in 1310 (Table 2.2, pp. 25-27). Thereafter, the introduction of gold coinages (from 1343-51) created significant problems both for our estimates of money supplies and for the well-being of the English domestic economy, especially since the English government consistently and seriously overvalued gold to the severe detriment of silver coinage supplies (in effect, England exported silver to acquire gold), given that silver coin was the chief mechanism for transacting domestic trade, wages, and other such payments.
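The growth multiples quoted in that paragraph follow directly from the stocks it cites. As a quick arithmetic sanity check (all figures are the review's own, taken at face value; nothing else is assumed):

```python
# Money-stock and population figures as quoted in the review.
stock_1066_low, stock_1066_high = 25_000, 37_500   # £, late 11th-century estimates
stock_1300 = 1_000_000                             # "more than £1.0 million" by 1300

# Implied growth multiples of the money supply.
fold_low = stock_1300 / stock_1066_high            # ~26.7, i.e. roughly 27-fold
fold_high = stock_1300 / stock_1066_low            # exactly 40-fold

# Population estimates, in millions (Domesday-era vs. c. 1300).
pop_1086_low, pop_1086_high = 2.0, 2.5
pop_1300_low, pop_1300_high = 5.0, 6.0

print(f"money supply grew {fold_low:.0f}- to {fold_high:.0f}-fold")
print(f"population grew {pop_1300_low / pop_1086_high:.1f}- to {pop_1300_high / pop_1086_low:.1f}-fold")
```

The multiples reproduce the review's "27 to 40 fold" monetary expansion against an "at least doubled and may have tripled" population, which is the contrast underpinning Bolton's argument about monetization.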
That problem, however, leads us to his second goal, for which he is much less successful: to refute the current “monetarist” views that later fourteenth- and fifteenth-century England experienced severe monetary scarcities (whether seen in terms of stocks or flows), most especially in silver coin supplies. A disclaimer is in order: I am evidently one of those so-called monetarists under attack. The tenor of the book becomes most evident in his statement (p. 75) that: “It [the money supply] was not the sole determining factor [of price levels] as monetarist historians argue.” I do not know of anyone who now does so. That negative viewpoint may be deduced from his lengthy discussion, in his opening chapter, of the well-known and much abused Fisher Identity: M.V = P.T. Thus, if one accepts the view that changes in V (velocity) and T (volume of transactions) cancel each other out, one might deduce that the price level P (usually measured by the Consumer Price Index, or CPI) is directly and proportionately a function of changes in M. But, even if some historians still use this antiquated formula, few if any economists do so, preferring the modernized version in the form M.V = P.y (the occasionally-used equation M.V = GNP is unacceptable as an analytical tool). In this version, y, representing real net national income (or output), thus replaces the completely unmeasurable T; and V thus becomes the income velocity of high-powered money (however defined). Most economists now prefer even more to use the Cambridge “cash balances” approach, with a demand-for-money equation: M = k.P.y, in which M, P, and y remain the same, while k represents that proportion of national income that the public collectively chooses to hold in non-earning real cash balances, according to determinants of liquidity preference, so that k is often sensitive to changes in interest rates. Mathematically, k is the reciprocal of V.
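For readers who prefer to see the three formulations side by side, they can be set out as follows (this is simply a restatement of the equations and symbols already defined in the paragraph above):

```latex
\begin{align*}
MV &= PT  && \text{(Fisher identity: } T \text{ = volume of transactions)} \\
MV &= Py  && \text{(income form: } y \text{ = real net national income replaces } T\text{)} \\
M  &= kPy, \quad k = \tfrac{1}{V} && \text{(Cambridge cash-balances demand for money)}
\end{align*}
```

The Cambridge form makes the key analytical point explicit: because k (the share of income held as idle cash balances) can move, a change in M need not translate proportionately into a change in P.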

As may be deduced from either (revised) formula, an expansion in M may have been offset by some decline in V (with a lesser need to economize on coin use), and thus by some increase in k, and also by an increase in y, especially if an increased M led to a decline in interest rates (with no changes in liquidity preference) and thus to a greater stimulus for investment and trade, so that P would have risen less than proportionately, if at all. But the converse was not necessarily true, for the various forces contracting monetary stocks may also have constricted monetary flows: i.e., also reducing V and thereby increasing k. These revised formulae clearly demonstrate that any analysis of changes in the price level requires a detailed understanding of changes not only in money stocks and money flows (especially liquidity preferences) but also in the real economy, as represented by y: i.e., changes in population, technology, economic organization, real capital investment, etc. In my recent publications on coinage debasements, I have sought to prove that in late-medieval and early-modern Europe increases in M never resulted in proportional increases in the price level, even during Henry VIII's Great Debasement (Munro 2011, 2012a, 2012b). None of this constitutes the supposed "monetarism" that Bolton portrays, except to indicate that "money matters" (a proposition that Bolton admittedly never denies).

Bolton's specific goal, in the final two chapters, 8 and 9, is to prove that increases in the supply and use of various credit instruments fully offset the two supposed "bullion famines": those from ca. 1375 to ca. 1420 and from ca. 1440 to ca. 1480. Indeed, his focus on the expanding role of credit allows him fully to accept the nature and extent of these two "bullion famines" as portrayed by the so-called "monetarists," in contrast to the published views of the current group of "anti-monetarist" historians (such as Sussman 1990, 1993, 1995, 1998, 2003).
He thus accepts the three prevailing theses advanced to explain that coinage scarcity: a severe decline in the outputs of the European silver and gold mines; disruptions in the trans-Saharan African gold trade to the Mediterranean; and increased bullion outflows to the East, particularly for purchases of Asian spices and other luxury goods. But this third thesis seems inconsistent with his view that late-medieval England always enjoyed a surplus in its balance of payments with the continent. I myself am far from convinced that any payments deficit with the East, so chronic from Roman times, became proportionately worse during the later Middle Ages, especially because the specific evidence adduced in favor of this thesis (from Ashtor 1971, 1983) comes from the 1490s, when the Central European mining boom, having commenced in the 1460s (and peaking in the 1530s), was supplying vast new quantities of silver to promote increased Venetian trade with the Levant (Munro 2003a). The more significant of these factors, therefore, may have been the reduction in European inflows of African gold from the 1370s: a trade that the Portuguese later sought to restore, from the 1440s, with considerable success from the 1470s.

What Bolton neglects to consider as a major factor in these "bullion famines" is changes in Cambridge k (and thus in V): i.e., an increased liquidity preference in the form of hoarding, not by burying precious metals in the ground but by converting them into plate and jewelry, readily changeable back into coin in times of war-induced taxation. The one other historian who has given such emphasis to changes in liquidity preference and hoarding ("thesaurisation"), as a reaction to general economic pessimism and risk aversion in times of chronic plague, other forms of depopulation, economic contraction, and periodic depressions, is Peter Spufford (1988); but Spufford still places greater emphasis on the roles of the European mining slump and bullion outflows to the East.

Bolton obviously does not wish to entertain the Spufford thesis, which necessarily implies a decrease in the income velocity of money, because he seeks to show that an increased use of credit fully offset the bullion famines by increasing either V or M or both. In this debate on the role of credit, his chief opponent is Pamela Nightingale (1990, 1997, 2004, 2010); indeed, the two have continued this debate in recent issues of the British Numismatic Journal (2011, 2013). I continue to support Nightingale. That stance might seem predictable for one accused of being a "monetarist," so readers of this review must judge for themselves by a careful examination of their respective publications (and the others cited here). In my view, Bolton fails to refute or contradict Nightingale's two major propositions. The first, and most important, is that the supply of credit remained essentially a function of the coined money supply, because most (if not all) credit transactions depended on the use of coin, and especially on the creditor's confidence of being fully repaid in coin: credit thus generally expanded with increases in the coined money supply and conversely contracted, often disproportionately, with any decline in the supply or circulation of coined money. On this important issue, Nightingale receives full support from many other monetary historians: Peter Spufford (1988), Nicholas Mayhew (1974, 1987, 1995, 2004), Reinhold Mueller (1984: for Italy), Frank Spooner (1972: for France), and most recently (if less strongly) Chris Briggs (2008, 2009: for England). Nightingale's second proposition, also endorsed by most of these historians, is that the wide variety of credit instruments used in late-medieval England were not yet negotiable, and thus, while affecting velocity (V), they could not and did not add to the money supply (M), though the difference between the two may here be moot.
To be sure, many of these credit instruments were, and long had been, assignable: transferable to third parties. But as Eric Kerridge (1988), whom Bolton cites for other purposes, long ago stressed, "transferability is not negotiability," a point that Michael Postan had made even earlier (1928, 1930), despite Bolton's assertions to the contrary. The fully developed legal institutions required for the secure negotiability of commercial bills, protecting the full rights of assignees and bearers to claim and enforce payment on redemption, were first established in the Habsburg Netherlands by imperial legislation enacted in 1537 and 1541, as Herman Van der Wee has clearly demonstrated (1963, 1967, 1975, 2000). Not until the early seventeenth century do we find comparable full-fledged English acceptance of negotiability, and no national legislation until the Promissory Notes Act of 3 & 4 Anne c. 8 (1704).

Equally essential for full negotiability was the legal acceptance of discounting, a problem related to the issue of usury, given short shrift not only by Bolton but also by Nightingale and most other financial historians (except, notably, De Roover 1967, also in Kirshner 1974). To be sure, we may fairly assume that many medieval creditors did disguise interest in a loan by increasing the amount stipulated for repayment; but disguising such implicit interest was far more difficult in discounting (selling a bill for less than face value before redemption). As Van der Wee has also demonstrated for the Habsburg Netherlands, discounting, along with multiple transfers by endorsement, spread only after an imperial ordinance, issued in October 1540, explicitly permitted interest payments on commercial loans up to 12%. He also demonstrated that nominal interest rates in the Netherlands dropped sharply in this era, by almost half: from 20.5% in 1511-15 to 11.0% in 1566-70; real rates dropped even further with the inflation of the Price Revolution.
Similarly, according to Norman Jones (1989), an even sharper fall in English interest rates on commercial bills took place after Elizabeth I, in 1571, restored her father's abortive statute (1545) permitting interest payments up to 10%: from about 30% in the 1560s to 10% by 1600, with further declines in the seventeenth century, to about 5% (see also Homer and Sylla 1997, pp. 89-143; Munro 2012c). Bolton has also not taken account of the significantly increased restrictions on the use of credit in fifteenth-century England, from both anti-usury and bullionist legislation, or of the prevailing social attitudes that remained deeply embedded until the early Stuart era. As Lawrence Stone (1965) so aptly commented on Elizabethan England: "Money will never become freely or cheaply available in a society which nourishes a strong moral prejudice against the taking of any interest at all. ... If usury on any terms, however reasonable, is thought to be a discreditable business, men will tend to shun it, and the few who practise it will demand a high return for being generally regarded as moral lepers."

If we were to accept, instead, Bolton's contention that an increased use of credit fully offset the coined-money scarcity evident in the two bullion famines, then we would be hard pressed to explain the sharp deflation of these two periods. Bolton evidently sees no need to do so, for his book, most surprisingly, contains no tables or graphs on the price level (CPI); he provides only one price graph, on relative prices for just wheat and oxen, from 1160 to 1350 (p. 183). Demographic decline cannot itself explain the periods of deflation (apart from its possible impact on V).
Note that the Black Death (1348-49), quickly reducing the population by about 40%, was followed by three decades of rampant inflation: the Phelps Brown and Hopkins CPI (1451-75 = 100) rose from a quinquennial mean of 85.53 in 1341-45 to one of 136.40 in 1366-70, falling slightly to 127.35 in 1371-75. Thereafter, the CPI fell to a low of 103.70 in 1421-25, for an overall decline of 23.94%, despite the 16.67% silver debasement of 1411-12. Rising thereafter to a peak of 124.22 in 1436-40, the CPI fell by 25.40% during the second "bullion famine": to a nadir of 92.667 in 1476-80, again despite the 20.0% silver debasement of 1464. Recent alternative historical consumer price indexes, those by Robert Allen (2001) and Gregory Clark (2004, 2007), neither cited by Bolton, show the same patterns of inflation and deflation demonstrated in the older Phelps Brown and Hopkins composite price index (1956, 1981: revised by Munro).

Bolton consequently does not take full account of the negative economic consequences of deflation. If all relative prices had moved together in tandem, with proportional changes, then neither deflation nor inflation would matter. But price changes have never done so, especially factor prices in relation to commodity prices. In general, deflation raises the burden of factor costs for borrowers and entrepreneurs, while inflation reduces that cost burden. The most familiar such phenomenon is downward nominal-wage stickiness, so widespread throughout Western Europe, unaffected by demographic factors, and persistent in England itself until 1920 (Smith 1776/1937; Phelps Brown and Hopkins 1955/1981; Munro 2003b). But nominal interest rates and land rents were generally also sticky in this era, especially when fixed by contract, though for much shorter periods. Thus all these real factor costs rose, at least in the short run, with the fall in the Consumer Price Index.
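The deflation figures quoted above follow directly from the quinquennial CPI means; this short calculation (mine, not drawn from any published code of Bolton's or Munro's) reproduces them to within rounding:

```python
# Deflation during the two "bullion famines," computed from the quinquennial
# Phelps Brown and Hopkins CPI means (1451-75 = 100) quoted in the text.

def pct_decline(start: float, end: float) -> float:
    """Percentage fall of an index from start to end."""
    return (start - end) / start * 100.0

# First famine: from the 1366-70 peak (136.40) to the 1421-25 low (103.70);
# the small gap from the 23.94% cited reflects rounding of the means.
first = pct_decline(136.40, 103.70)

# Second famine: from the 1436-40 peak (124.22) to the 1476-80 nadir (92.667),
# matching the 25.40% cited.
second = pct_decline(124.22, 92.667)

print(round(first, 2), round(second, 2))
```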
If creditors were more reluctant to lend in times of monetary scarcity and depression, for fear of non-payment, debtors were also reluctant to borrow when facing the prospect of higher real costs in payments of both interest and principal. For both creditors and debtors that reluctance, especially in the mid-fifteenth century, may have been due as much to the adverse circumstances of the commercial depressions that accompanied that bullion "famine" and deflation as to monetary scarcity itself (Hatcher 1996; Nightingale 1997; Bois 2000).

A final problem, and one that pervades much of the book, concerns the proper distinctions between bullion, coinage, and moneys-of-account, and the closely related problem of coinage debasements. Bolton ought to have followed the model set forth long ago by Sir Albert Feavearyear (1931/1963), whose absence from the bibliography is astonishing. By this model, silver and gold coins, bearing the official stamp of the ruler, generally circulate by tale (official face value), commanding an agio or premium over bullion. That agio represents the sum of the minting costs of brassage (for the mint-master) and seigniorage (a tax for the ruler), added to the mint's bullion price; but also, for the public, it represents their savings on transaction costs in not having to weigh the coins and assay their fineness. As Douglass North (1984, 1985) has demonstrated, transaction costs are always subject to considerable scale economies: thus they are a major burden in small-scale, low-valued silver transactions in retail trade and wage payments, but far less so in very large-volume, high-valued transactions, especially those involving gold in wholesale and foreign trade and major debt transactions. Bolton is very ambiguous on whether coins circulated by weight or by tale, ignoring the scale economies of transactions, but seemingly supports the former view (despite the evidence he presents on pp. 120-21).
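Feavearyear's agio can be expressed as simple mint arithmetic. The figures here are entirely hypothetical, chosen only to illustrate the decomposition into brassage and seigniorage:

```python
# Hypothetical mint account illustrating the agio (premium of coin over
# bullion) as the sum of brassage and seigniorage, per Feavearyear's model.
# All numbers are invented for illustration; no historical mint is implied.

mint_price = 95.0   # pence paid to a merchant per unit weight of bullion
brassage = 3.0      # mint-master's production charge, in pence
seigniorage = 2.0   # ruler's tax, in pence

# Coined (tale) value of that bullion once struck:
face_value = mint_price + brassage + seigniorage  # 100d

# Premium of coin by tale over the mint's bullion price:
agio = (face_value - mint_price) / mint_price
print(f"{agio:.2%}")
```

For the public, that premium was worth paying so long as it was smaller than the transaction costs of weighing and assaying, which is why (per North) the agio mattered least in large, high-valued gold transactions.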
An increased tendency for coins to be accepted only by weight, in higher-valued transactions, arose when the quality of the circulating coinage inevitably deteriorated over the years and decades following a general recoinage: when its silver contents diminished through normal wear and tear, but especially when the coinage became more and more corrupted by the nefarious practices of clipping, "sweating," and counterfeiting, none of which would have been profitable had coins earlier circulated by weight. Such deterioration, the loss of public confidence, and growing refusals to accept coins by tale meant that all coins lost their former agio, with four consequences. First, merchants still accepting coins by tale sought compensation for perceived silver losses by raising their prices; second, good, higher-weight coins were culled and hoarded or exported, often in exchange for foreign counterfeits (Gresham's Law); third, bullion ceased to flow to the mints, so that the king lost his seigniorage revenues. Fourth, the king consequently had no alternative but to debase his coinage to bring it into alignment with the current depreciated circulation, thereby restoring the agio and resuming the flow of bullion to the mints. In Feavearyear's view, this purely defensive reaction to coinage deterioration explains all English silver debasements before Henry VIII's Great Debasement of 1542-52: in particular, the 10.00% silver reduction of 1351; the 16.67% reduction of 1411/12; the 20.00% reduction of 1464; and the 11.11% reduction of 1526, so that the fine silver content of the penny fell from 1.332 g in 1279 to just 0.639 g in 1526. Henry VIII's Great Debasement was undertaken, however, for purely fiscal motives (as had long been the continental pattern): to augment seigniorage revenues. But the evidence on changes in seigniorage rates indicates that such fiscal motives had also prevailed in Edward IV's silver and gold debasements of 1464-65 (Munro 2011).
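The cumulative effect of these silver reductions (which compound with the interim changes between 1279 and 1351 not listed individually), together with Edward IV's gold revaluation of 1464, can be computed directly from the figures in the text; the calculation itself is my illustration, not the author's:

```python
# Cumulative silver debasement: fine silver content of the English penny
# fell from 1.332 g (1279) to 0.639 g (1526), per the figures in the text.
penny_1279 = 1.332   # grams of fine silver
penny_1526 = 0.639   # grams of fine silver
silver_loss = 1 - penny_1526 / penny_1279
print(f"silver per penny: {silver_loss:.1%} cumulative reduction")

# Edward IV's August 1464 gold debasement by revaluation: the physically
# unchanged noble was raised from 6s 8d to 8s 4d in money-of-account,
# reducing the fine gold represented by each penny of account.
old_noble_pence = 6 * 12 + 8    # 6s 8d  = 80d
new_noble_pence = 8 * 12 + 4    # 8s 4d  = 100d
gold_loss = 1 - old_noble_pence / new_noble_pence
print(f"gold per penny of account: {gold_loss:.0%} reduction")
```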
None of this analysis, nor any credible explanation for debasement, can readily be found in Bolton, who even denies that English kings debased their coinages before the Great Debasement, on the overly literal grounds that the sterling silver fineness (92.5%) was always maintained (except for the 1336 issue of 10 dwt halfpence, i.e. 83.33% fine silver). Almost all monetary historians define debasement instead as the reduction of the quantity of fine silver or gold in the money-of-account unit (pence, pound). That was achieved by a diminution in fineness (adding more base metal) and/or by a reduction in weight, but also, for gold coins, by an increase in their official exchange rates. Thus Edward IV's initial debasement of gold in August 1464 was achieved by increasing the value of the traditional, physically unchanged gold noble from 6s 8d to 8s 4d. In this respect, I also regret the absence, in a book on money in the medieval economy, of tables on English mint outputs (except for one graph on the Calais mint), in both pounds sterling and kilograms of fine metal, with related details on specific coinage issues in terms of weight, fineness, and mint charges, though much of that information can be found in both Christopher Challis (1992) and Martin Allen (2011, 2012).

Other readers may, however, place much less emphasis on the issues raised in this review; and some, suspecting an unwarranted "monetarist" bias in this review, may well support Bolton's views, especially on the role of credit in the late-medieval economy. Indeed, I must stress the significant contributions that Bolton has made in this field, especially those based on his ongoing research on the Borromei bankers (Milan) and on the roles of other Italian merchant-banking firms in English foreign and domestic trade, i.e. in London.
As I indicated at the outset of the review, this book is one of the most important published in English economic history in the past two decades, and one in which the virtues well outweigh the defects. I recommend that you buy it; if you do, get the online bibliography now, before it disappears from the web.

References:

Allen, Martin (2011), "Silver Production and the Money Supply in England and Wales, 1086 – c. 1500," Economic History Review, 64: 114-31.

Allen, Martin (2012), Mints and Money in Medieval England. Cambridge and New York: Cambridge University Press.

Allen, Robert (2001), "The Great Divergence in European Wages and Prices from the Middle Ages to the First World War," Explorations in Economic History, 38: 411-47.

Hatcher, John (1996), "The Great Slump of the Mid-Fifteenth Century," in Progress and Problems in Medieval England, ed. Richard Britnell and John Hatcher. Cambridge and New York: Cambridge University Press, pp. 237-72.

Munro, John (2003b), "Wage-Stickiness, Monetary Changes, and Real Incomes in Late-Medieval England and the Low Countries, 1300-1500: Did Money Matter?" Research in Economic History, 21: 185-297.

Munro, John (2011), "The Coinages and Monetary Policies of Henry VIII (r. 1509-47)," in The Collected Works of Erasmus: The Correspondence of Erasmus, Vol. 14: Letters 1926 to 2081, A.D. 1528, trans. Charles Fantazzi and ed. James Estes. Toronto: University of Toronto Press, pp. 423-76.

Munro, John (2012a), "The Technology and Economics of Coinage Debasements in Medieval and Early Modern Europe: With Special Reference to the Low Countries and England," in Money in the Pre-Industrial World: Bullion, Debasements and Coin Substitutes, ed. John Munro, Financial History Series no. 20. London: Pickering & Chatto Ltd., pp. 15-32, 185-89 (endnotes).

Munro, John (2012b), "Coinage Debasements in Burgundian Flanders, 1384-1482: Monetary or Fiscal Policies?" in Comparative Perspectives on History and Historians: Essays in Memory of Bryce Lyon (1920-2007), ed. David Nicholas, James Murray, and Bernard Bachrach. Kalamazoo: Medieval Institute Publications, Western Michigan University, pp. 314-60.

Munro, John (2012c), "Usury, Calvinism and Credit in Protestant England: From the Sixteenth Century to the Industrial Revolution," in Religione e istituzioni religiose nell'economia europea, 1000-1800 / Religion and Religious Institutions in the European Economy, 1000-1800, ed. Francesco Ammannati. Florence: Firenze University Press, pp. 155-84.

Munro, John, The Phelps Brown and Hopkins "Basket of Consumables" Commodity Price Series and Craftsmen's Wage Series, 1265-1700: Revised by John Munro, available online in Excel at www.economics.utoronto.ca/munro5/ResearchData.html.

Van der Wee, Herman (2000), "European Banking in the Middle Ages and Early Modern Period (476-1789)," in A History of European Banking, 2nd edn., ed. Herman Van der Wee and G. Kurgan-Van Hentenrijk. Antwerp: Mercator, pp. 152-80.

John Munro is Professor Emeritus of Economics at the University of Toronto, specializing in the economic history of the late-medieval Low Countries and England, with a focus on money and textiles. His recent publications in monetary history (2011-2012) are listed in the bibliography above; he has also recently published "The Rise, Expansion, and Decline of the Italian Wool-Based Cloth Industries, 1100-1730: A Study in International Competition, Transaction Costs, and Comparative Advantage," Studies in Medieval and Renaissance History, 3rd series, 9 (2012), 45-207.

Copyright (c) 2013 by EH.Net. All rights reserved. This work may be copied for non-profit educational uses if proper credit is given to the author and the list. For other permission, please contact the EH.Net Administrator (administrator@eh.net). Published by EH.Net (June 2013). All EH.Net reviews are archived at http://www.eh.net/BookReview