Discussion and comment on the latest research in business, economic and financial history


The Emergence of an Export Cluster: Traders and Palm Oil in Early Twentieth-century Southeast Asia

by Valeria Giacomin (Harvard-Newcomen Fellow in Business History during 2017/2018)

Abstract: Malaysia and Indonesia account for 90 percent of global exports of palm oil, forming one of the largest agricultural clusters in the world. This article uses archival sources to trace how this cluster emerged from the rubber business in the era of British and Dutch colonialism. Specifically, the rise of palm oil in this region was due to three interrelated factors: (1) the institutional environment of the existing rubber cluster; (2) an established community of foreign traders; and (3) a trading hub in Singapore that offered a multitude of advanced services. This analysis stresses the historical dimension of clusters, which has been neglected in previous management and strategy research, by connecting cluster emergence to the business history of trading firms. The article also extends the current literature on cluster emergence by showing that the rise of this cluster occurred in parallel with, and was intimately related to, the product specialization within international trading houses.

In this article, Giacomin presents an archive-based historical analysis of how palm oil became one of the most important traded commodities from Southeast Asia to the world in the early- to mid-1900s. She uses the cluster (defined as a “geographically proximate group of interconnected companies and associated institutions in a particular field, linked by commonalities and complementarities”) approach to explain how the organizational structure of the pre-existing rubber cluster in the Malay peninsula (at the time a British colony) and Sumatra (under the Dutch) formed the basis for an emerging palm oil cluster in the same geographical region.

The first part of her paper focuses on how the regional rubber cluster structure developed. While the literature commonly credits unique local factors in the development of such clusters, Giacomin instead looks at nonlocal factors: (1) mainland Chinese traders that controlled regional trading routes from major Southeast Asian ports and brought in low-skilled tappers and harvesters from surrounding territories, and (2) Western traders that brought in capital inputs (seeds, machinery and finance) and highly-skilled human resources (estate managers and engineers) from Europe and other parts of the Empire and established headquarters in major European trading ports, allowing them to access crucial market information on demand. Both of these foreign merchant communities congregated in the emerging trading hub of Singapore, strategically located in between British Malaya and Dutch Sumatra, and developed a mutual dependency: Chinese contacts were vital for Western traders wanting to run a business in the Eastern colonies, while the Chinese needed Western traders to scale up their region-based commercial activity to a global scope.

The second part of the article explains how palm oil became the “spin-off” crop of the rubber cluster in the region. During the natural rubber boom in the early 1900s, the Malaya-Sumatra rubber cluster became over-dependent on this export and thus over-specialised in terms of existing practices, agronomic knowledge (through R&D agencies like the Rubber Research Institute), and coordinating institutions (e.g. the Rubber Growers’ Association). When the advent of synthetic rubber in the 1920s caused natural rubber prices to fall, companies desperately looked to diversify their production to recoup and replace their losses. However, this over-specialisation meant that they could consider only a limited range of crops similar to rubber for diversification. As it happened, the rubber estate structure could be conveniently repurposed into oil palm estates. Furthermore, the oil palm flourished in a much narrower latitude span compared to rubber, giving companies confidence that the demand for palm oil would be more sustained since supply would be more limited.
Giacomin concludes that even though the literature often regards over-specialisation as fatal, in the case of the Southeast Asian rubber cluster, this serendipitously led to the emergence of one of the most enduring regional clusters serving the global economy. Today, Indonesia and Malaysia account for over 90% of global palm oil exports.

While the significance of the rubber sector in paving the way for palm oil in Malaysia and Indonesia is well known, this paper remains an important and significant addition to the current literature, not only on general business management and strategy, but also more specifically in terms of (1) palm oil expansion and development and (2) agricultural systems (estate vs. smallholdings).

Firstly, the specific role of nonlocal (especially Chinese) entrepreneurs in connecting the production areas in this region to the consumption areas in the West was previously not well understood. In the context of the “global north” and “global south”, palm oil can be considered a uniquely “southern” vegetable oil. Compared to other oils like sunflower, rapeseed, and soya bean, the production, major business players, beneficiaries and direct impacts of palm oil are situated comparatively more in the global south. Giacomin alludes to this in reference to the narrow latitude where oil palm can be grown. This northern/southern framing has coloured much of the recent debates and controversies over palm oil today. This paper’s analysis of the historical role of Chinese merchants is especially useful in further informing the idea of palm oil as a “southern” oil, while at the same time, the equally important role of the Western merchants that Giacomin highlights may be useful in moderating certain northern arguments in this ongoing debate.

Secondly, the historical nature of Giacomin’s analysis of this sector is especially timely in the current period, when other regions, like West Africa and Latin America, are looking to increase their global trading share of palm oil. Giacomin mentions that even though the oil palm originated from and was first produced commercially in West Africa during colonial times, West African territories were unable to effectively penetrate global markets because they did not display the same institutional cohesion across neighbouring territories, something that Southeast Asia managed to achieve through the pre-existing rubber cluster. This “cluster” model may thus provide an exemplar for emerging palm oil production regions and companies seeking an effective way to break the current oligopoly of Indonesian and Malaysian firms in the palm oil industry. Especially for West Africa, which is considered the current “greenfield” area for palm oil outside Southeast Asia, strategies can now be developed to avoid past mistakes.

Finally, Giacomin’s analysis of early smallholders is useful to inform current discussions on the ideal agricultural systems for oil palm. Her paper argues that in the mid-20th century, the fact that oil palm was an estate crop (involving high costs and favouring large-scale production) provided a solution to the problem previously faced by rubber companies, which had been facing competition from and losing market share to rubber smallholdings. While this might have been the case historically, oil palm today has been successfully adapted to the smallholder model in both Indonesia and Malaysia, with a significant share of both countries’ production (about 40% each) coming from either organised or independent smallholders. Giacomin’s analysis stops at the early decolonisation period, before the newly independent nations began to formulate oil palm smallholder schemes as a strategic tool for rural development and poverty eradication. Her analysis can nevertheless serve as a useful starting point in the ongoing debates on whether and how the estate and smallholder systems can co-exist efficiently and in harmony.

Overall, this paper is a valuable piece of business history that helps to further shed light on a controversial agro-economic sector often shrouded in secrecy. The fact that palm oil continues to be a hot topic worldwide today underlines the relevance and importance of such forays into history to inform the present.

Abstract: In the decade after the Second World War IBM rebuilt its European operations as integrated, wholly owned subsidiaries of its World Trade Corporation, chartered in 1949. Long before the European common market eliminated trade barriers, IBM created its own internal networks of trade, allocating the production of different components and products between its new subsidiaries. Their exchange relationships were managed centrally to ensure that no European subsidiary was a consistent net importer. At the heart of this system were eight national electric typewriter plants, each assembling parts produced by other European countries. IBM promoted these transnational typewriters as symbols of a new and peaceful Europe and its leader, Thomas J. Watson, Sr., was an enthusiastic supporter of early European moves toward economic integration. We argue that IBM’s humble typewriter and its innovative system of distributed manufacturing laid the groundwork for its later domination of the European computer business and provided a model for the development of transnational European institutions.

Prizes are awarded all the time for “best article” in a particular field, calling our attention to a well-executed, thoughtful one. But, occasionally, a prize-winning article signals bigger shifts in a discipline than might otherwise be noticed. With this year’s award of the Business History Conference’s “Mira Wilkins Prize” for the best article published in Enterprise & Society, we have such a signal.

Petri Paju and Thomas Haigh wrote “IBM Rebuilds Europe: The Curious Case of the Transnational Typewriter,” published in June 2016. They were recognized for “the best article on international business history,” the objective of this prize, but it is far more than good international business history.

The article chronicles how IBM created an internal network across eight national electric typewriter plants in post-World War II Europe to manufacture parts and to assemble these products. While electric typewriters were in great demand and IBM made what many considered to be the best one, the company created an internal network for their manufacture and distribution that transcended international borders in the decade after the war, presaging what would happen for some European products after the establishment of the European Union. But creating a Europe-wide market, as governments would later do, was never solely the point; rather, it was to drive down production costs and increase both demand and the ability to deliver enough machines, while promoting IBM management’s belief that “World Peace through World Trade” could be a global objective for nations and companies. The authors trace how parts were made in one country, shipped to another, and there assembled and sold, under what was called the “Interchange Plan.” This experience taught IBM management how to create a more formal pan-European, and later worldwide, organization in 1949, IBM World Trade, that could manufacture, sell, and support its products. Within half a generation, World Trade did as much business as the American side of IBM.

Lessons learned in forming a pan-European typewriter business made it possible for IBM to develop a pan-European computer business that quickly dominated the mainframe business in Western Europe and in other parts of the world. Just as important, when IBM moved into the computer business, it already had factories, sales offices, and experienced employees in those countries that would become its best customers. These included Great Britain, France, West Germany, the Nordics, Italy, Spain, and a scattering of operations in every country that eventually became part of the EU. The authors explain how the company created and learned from its “Interchange Plan,” operationally and strategically. They explore the accounting detail to explain how money and budgets were exchanged across borders when governments had yet to sort out those issues, let alone even allow such exchanges.

The benefits to IBM were both obvious and extraordinary. Obvious ones included reduced operating costs for the manufacture and increased sale of typewriters. Less obvious, but ultimately more important, “this system would also foster interdependence among the various national [IBM] firms,” while spreading capabilities across multiple countries so that if one nation were to nationalize or block local IBM production, as occurred during World War II, another plant could pick up the slack. The company used its system in its public relations campaign to promote international trade through American managerial leadership and “to meet the challenges of communism” in the Cold War. Other American corporations—all of them with close ties to IBM’s management—took note of what IBM was learning and applied those lessons as well. IBM’s country organizations could also claim to be local, since each employed nationals, Finns in Finland, French in France, and so forth.

The lesson urged by these two young historians is an appropriate one at the moment: “think more carefully about the assumption that postwar globalization of European trade can be reduced to ‘Americanization’,” because IBM’s experience reflected a “hybridization of U.S. technology and management in postwar Europe.” Apply their suggestion worldwide. IBM was also prepared to experiment and operate in ways that valued expansion into new markets even at the costs of profits. That is one reason why it came to dominate the mainframe market so fast and for so many decades. The wisdom of today’s corporate fixation on shareholder value is challenged by this study of how IBM ran its typewriter business.

Perhaps the greater lesson, the more significant observation for why this prize this year is so important, lies elsewhere. For the past two decades, a month has barely gone by without an historian or economist publishing on the interactions of computing technology and business management. E&S is not alone in doing so; Technology & Culture has published some two-dozen similar articles in the new century, and Information & Culture is rapidly becoming another journal with a mix of business/information technology conversations. Petri Paju and Thomas Haigh are more than two gifted, prolific writers; they are teaching a new generation of scholars how to understand the role of information technologies and of management, business operations, and corporate strategy in a world filled with computers. Simply put, this article is seminal, worthy of being studied across multiple disciplines. The Mira Wilkins Prize Committee is to be congratulated for not letting this paper slip through the cracks.

Abstract: British general incorporation law granted companies an extraordinary degree of contractual freedom. It provided companies with a default set of articles of association, but incorporators were free to reject any or all of the provisions and write their own rules instead. We study the uses to which incorporators put this flexibility by examining the articles of association filed by three random samples of companies from the late nineteenth and early twentieth centuries, as well as by a sample of companies whose securities traded publicly. Contrary to the literature, we find that most companies, regardless of size or whether their securities traded on the market, wrote articles that shifted power from shareholders to directors. We find, moreover, that there was little pressure from the government, shareholders, or the market to adopt more shareholder-friendly governance rules.

Review by John Turner (Centre for Economic History, Queen’s University Belfast)

Tim Guinnane, Ron Harris and Naomi Lamoreaux are three scholars that every young (and old) economic historian should seek to emulate. This paper showcases once again their prodigious talent – there is careful analysis of the institutional and legal setting, a lot of archival evidence, rigorous economic analysis, and an attempt to understand how contemporaries viewed the issue at hand.

In this paper, Guinnane, Harris and Lamoreaux (GHL) examine the corporate governance of UK companies in the late nineteenth and early twentieth centuries. The UK liberalised its incorporation laws in the 1850s and introduced its first Companies Act in 1862. From a modern-day perspective, this Act enshrined very little in the way of protection for shareholders. However, the Appendix to the 1862 Companies Act contained a default set of articles of association, which was the company’s constitution. This Appendix, known as Table A, provided a high level of protection for shareholders by modern-day standards (Acheson et al., 2016). However, the majority of companies did not adopt Table A; instead they devised their own articles of association.

The aim of GHL’s paper is to analyse articles of association in 1892, 1912 and 1927 to see the extent to which they shifted power from shareholders to directors. To do this, GHL collected three random samples of circa 50 articles of association for 1892, 1912 and 1927. Because most (if not all) of these companies did not have their securities traded on stock markets, they also collected a sample of 49 commercial and industrial companies from Burdett’s Official Intelligence for 1892 that had been formed after 1888. However, only 23 of these companies had their shares listed on one of the UK’s stock exchanges.

GHL then take their samples of articles to see the extent to which they deviated from the clauses in Table A. Their main finding is that companies tended to adopt governance structures in their articles which empowered directors and practically disenfranchised shareholders. This was the case whether the company was small or large, public or private. They also find that this entrenchment and disenfranchisement became more prominent over time. However, GHL unearth a puzzle: they find that shareholders and the market appear to have been perfectly okay with poor corporate governance practices.

How do we resolve this puzzle? One possibility is that shareholders (and the market) at this time only really cared about dividends. High dividend pay-out ratios in this era kept managers on a short leash and reduced the agency costs associated with free cash flow (Campbell and Turner, 2011). Interestingly, GHL suggest that this may have made it more difficult for firms to finance productivity-enhancing investments. In addition, they suggest that the high-dividend-entrenchment trade-off may have locked in managerial practices which inhibited the ability of British firms to respond to future competitive pressures and may ultimately have ushered in Britain’s industrial decline.

Another solution to the puzzle, and one that GHL do not fully explore, is that the ownership structure of the company shaped its articles of association. The presence of a dominant owner or founding family ownership would potentially lessen the agency problem faced by small shareholders. In addition, founders may not wish to give too much power away to shareholders in return for their capital. On the other hand, firms which need to raise capital from lots of small investors on public markets may adopt more shareholder-friendly articles. The vast majority of companies in GHL’s sample do not fall into this category, which might go some way to explaining their findings.

A final potential solution is that the vast majority of firms which GHL examine may have raised capital in a totally different way than public companies, and this shaped their articles of association. These firms probably relied on family, religious and social networks for capital, and the shareholders trusted the directors because they personally knew them or were connected to them through a network. Indeed, we know precious little about how and where the multitude of private companies in the UK obtained their capital. Like all great papers, GHL have opened up a new avenue for future scholars. The interesting thing for me is what happens when private firms went public and raised capital. Did they keep their articles which entrenched directors and disenfranchised shareholders?

Unlike the focus of GHL on mainly private companies, a current Queen’s University Centre for Economic History working paper examines the protection offered to shareholders by circa 500 public companies in the four decades after the 1862 Companies Act (Acheson et al., 2016). Unlike GHL, it takes a leximetric approach to analysing articles of association. Acheson et al. (2016) have two main findings. First, the shareholder protection offered by firms in the nineteenth century was high compared to modern-day standards. Second, firms which had more diffuse ownership offered shareholders higher protection.

How do we reconcile GHL and Acheson et al. (2016)? The first thing to note is that most of Acheson et al.’s sample is before 1892. The second thing to note is that in a companion paper, Acheson et al. (2015) identify a major shift in corporate governance and ownership which started in the 1890s – companies formed in that decade had greater capital and voting concentration than those formed in earlier decades. In addition, unlike companies formed prior to the 1890s, the insiders in these companies were able to maintain their voting rights and entrench themselves. This corporate governance turn in the 1890s is where future scholars should focus their attention.

Abstract: Rogue trading has been a persistent feature of international financial markets over the past thirty years, but there is remarkably little historical treatment of this phenomenon. To begin to fill this gap, evidence from company and official archives is used to expose the anatomy of a rogue trading scandal at Lloyds Bank International in 1974. The rush to internationalize, the conflict between rules and norms, and the failure of internal and external checks all contributed to the largest single loss of any British bank to that time. The analysis highlights the dangers of inconsistent norms and rules even when personal financial gain is not the main motive for fraud, and shows the important links between operational and market risk. This scandal had an important role in alerting the Bank of England and U.K. Treasury to gaps in prudential supervision at the end of the Bretton Woods pegged exchange-rate system.

Review by Adrian E. Tschoegl (The Wharton School of the University of Pennsylvania)

Since the 1974 rogue trading scandal at Lloyds’ Lugano branch we have seen more spectacular sums lost in rogue trading scandals. What Dr Catherine Schenk brings to our understanding of these recurrent events is the insight that only drawing on archives, both at Lloyds and at the Bank of England, can bring. In particular, the archives illuminate the decision processes at both institutions as the crisis unfolded. I have little to add to her thorough exposition of the detail, so below I will limit myself to imprecise generalities.

Marc Colombo, the rogue trader at Lloyds Lugano, was a peripheral individual in a peripheral product line, in a peripheral location. As Schenk finds, this peripherality has two consequences: the rogue trader’s quest for respect, and the problem of supervision. Lloyds Lugano is not an anomaly. An examination of several other cases (e.g. Allied Irish, Barings, Daiwa, and Sumitomo Trading) finds the same thing (Tschoegl 2004).

In firms, respect and power come from being a revenue center. Being a cost center is the worst position, but being a profit center with a mandate to do very little is not much better. The rogue traders that have garnered the most attention, in large part because of the scale of their losses, were not malevolent. They wanted to be valued. They were able to get away with their trading for long enough to do serious damage because of a lack of supervision, a lack that existed because of the traders’ peripherality.

In several cases, Colombo’s amongst them, the trader was head of essentially a one-person operation that was independent of the rest of the local organization. That meant that the trader’s immediate local supervisor had little or no experience with trading. Heads of branches in a commercial bank come from commercial banking, especially commercial lending. Commercial lending is a slow feedback environment (it may take a long time for a bad decision to manifest itself), and so uses a system of multiple approvals. Trading is a fast feedback environment. The two environments draw different personality types and have quite different procedures, with the trading environment giving traders a great deal of autonomy within set parameters, an issue Schenk addresses and that we will discuss shortly.

Commonly, traders will report to a remote head of trading and to the local branch manager, with the primary line being to the head of trading, and the secondary line being to the local branch manager. This matrix management developed to address the need to manage and coordinate centrally but also respond locally, but matrix management has its limitations too. As the Gospel of Matthew points out, “No man can serve two masters, for either he will hate the one, and love the other; or else he will hold to the one, and despise the other” (Matthew 6:24). Even short of this, the issue that can arise, as it did at Lloyds Lugano, is that the trader is remote from both managers, one because of distance (and often time zone), and the other because of unfamiliarity with the product line. A number of software developments have improved the situation since 1974, but as some recent scandals have shown, they are fallible. Furthermore, the issue still remains that at some point the heads of many product lines will report to someone who rose in a different product line, which brings up the spectre of “too complex to manage”.

The issue of precautionary or governance rules, and their non-enforcement, is a clear theme in Schenk’s paper. Like the problem of supervision, this too is an issue where one can only do better or worse, but not solve. All rules have their cost. The largest may be an opportunity cost. Governance rules exist to reduce variance, but that means the price of reducing bad outcomes is the lower occurrence of good outcomes. While it is true, as one of Schenk’s interviewees points out, that one does not hear of successful rogue traders being fired, that does not mean that firms do not respond negatively to success. I happened to be working for SBCI, an investment banking arm of Swiss Bank Corporation (SBC), at the time of SBC’s acquisition in 1992 of O’Connor Partners, a Chicago-based derivatives trading house. I had the opportunity to speak with O’Connor’s head of training when O’Connor stationed a team of traders at SBCI in Tokyo. He said that the firm examined unusually large wins as intently as it examined unusually large losses: in either case an unexpectedly large outcome meant that either the firm had mis-modelled the trade, or the trader had gone outside their limits. Furthermore, what they looked for in traders was the ability to walk away from a losing bet.

But even small costs can be a problem for a small operation. When I started to work for Security Pacific National Bank in 1976, my supervisor explained my employment benefits to me. I was authorized two weeks of paid leave per annum. When I asked if I could split up the time he replied that Federal Reserve regulations required that the two weeks be continuous so that someone would have to fill in for the absent employee. Even though most of the major rogue trading scandals arose and collapsed within a calendar year, the shadow of the future might well have discouraged the traders, or led them to reveal the problem earlier. Still, for a one-person operation, management might (and in some rogue trading scandals did), take the position that finding someone to fill in and bring them in on temporary duty was unnecessarily cumbersome and expensive. After all, the trader to be replaced was a dedicated, conscientious employee, witness his willingness to forego any vacation.

Lastly, there is the issue of Chesterton’s Paradox (Chesterton 1929). When a rule has been in place for some time, there may be no one who remembers why it is there. Reformers will point out that the rule or practice is inconvenient or costly, and that it has never in living memory had any visible effect. But as Chesterton puts it, “This paradox rests on the most elementary common sense. The gate or fence did not grow there. It was not set up by somnambulists who built it in their sleep. It is highly improbable that it was put there by escaped lunatics who were for some reason loose in the street. Some person had some reason for thinking it would be a good thing for somebody. And until we know what the reason was, we really cannot judge whether the reason was reasonable.”

Finally, an issue one needs to keep in mind in deciding how much to expend on prevention is that speculative trading is a zero-sum activity. A well-diversified shareholder who owns both the employer of the rogue trader and the employers of their counterparties suffers little loss. The losses to Lloyds Lugano were gains to, inter alia, Crédit Lyonnais.

There is leakage. Some of the gainers are privately held hedge funds and the like. Traders at the counterparties receive bonuses not for skill but merely for taking the opposite side of the incompetent rogue trader’s orders. Lastly, shareholders of the rogue trader’s firm suffer deadweight losses of bankruptcy when the firm, such as Barings, goes bankrupt. Still, as Krawiec (2000) points out, for regulators the social benefit of preventing losses to rogue traders may not exceed the cost. To the degree that costs matter to managers, but not shareholders, managers should bear the costs via reduced salaries.

Abstract: This paper updates an earlier article published in Business History Review that concluded that by the second half of the 1990s, there had been a profusion of new, purportedly practical ideas about strategy, many of which embodied some explicit dynamics. This update provides several indications of a drop-off since then in the rate of development of new ideas about strategy but also a continued focus, in the new ideas that are being developed, on dynamics. And since our stock of dynamic frameworks has, based on one enumeration, more than doubled in the last fifteen to twenty years, updating expands both the need and the empirical basis for some generalizations about the types of dynamic strategy frameworks—and strategy frameworks in general—that managers are likely to find helpful versus those that they are not.

Review by Kyle Bruce (Macquarie University, Australia)

Editor’s note

Ghemawat’s 2017 paper below should not be read in isolation but as part of a round table organized at Harvard Business School that brought together historians and management scholars to discuss the origins of ideas in business and management. The results of the round table were published as a special edition of the Business History Review. In this sense, Ghemawat’s contribution to the special issue and its discussion by Chris McKenna (in the same special issue) came to an independent yet similar conclusion to that expressed by Nobel laureate Robert Shiller, who suggested “that in the age of social information networks, economists need to rethink how and why information really spreads.” (See a summary of Shiller’s ideas in The Role of Narratives in Economics).

It is laudable that the executive editors of the Business History Review created a space to disseminate the results of the round table through the journal. However, as you will read below, Kyle Bruce questions whether this is the right way to engage other management scholars in business history as, strictly speaking, the contribution by Ghemawat would be found wanting as scholarly work of international standing.

A final note is that in his comments on Ghemawat, even McKenna gets it wrong by pointing to Lotus 1-2-3 as the first spreadsheet. It was actually VisiCalc.

Having said that, the aim in this space is to generate academic debate through a blog format. So by all means do chip in.

As a historian and teacher of strategy and, moreover, as a close follower of Ghemawat’s work, I was very much looking forward to his recent update of his 2002 BHR paper on the history of the sub-discipline. I habitually invoke the decade-and-a-half-old piece as background reading for my Executive MBA strategy students and hitherto have experienced little, if any, pushback from students typically cagey about the words “theory” or “history”. Regrettably, I am not so sure the updated paper under review here will escape unscathed, for the simple reason that it is pretty tough to follow. Let me explain.

After briefly overviewing the 2002 paper that in essence discerned a profusion of new ideas about strategy – particularly those embodying a more dynamic approach – dating from the early to mid-80s, Ghemawat introduces his new findings. After a big peak in the mid-90s, there has been a marked drop-off in new ideas, but dynamics “is a sustained interest focus of strategic innovation rather than one of passing interest” (p. 5; emphasis added). So far, so good you might think, but I started to worry about the phraseology (“strategic innovation”?) attendant on the use of analytical tools from strategy and adjacent sub-disciplines to make sense of his findings; namely, “what should one make of the drop-off overall and the shift toward more attention to dynamics? And what, if anything, should be done?” (p. 8).

Pankaj Ghemawat

Unless the strictures concerning the dreaded “so what?” question have been lifted in history journals such as BHR, I could not discern, after several reads, a compelling argument as to why readers should be at all bothered by the findings presented. For students of the strategy-as-practice literature, for instance, the suggestion that there are fewer models and frameworks out there for practising managers to employ is not a concern, given they probably don’t use them anyway. For my MBA students, who routinely complain of framework fatigue, again, the theory drop-off is not a problem. And so, for me, the remainder of the paper was rather superfluous and unnecessarily complex. Curiously, I think Ghemawat makes it so when he concludes that while it’s certain there’s been a drop-off in the “rate of development of big new strategy ideas/frameworks, it is much harder to be definite about the welfare implications” (p. 10; emphasis added). For me, this conclusion renders redundant both the ensuing “what is to be done” question he poses and the next eight-and-a-half pages of the article devoted to “a critical assessment of frameworks new and old” (p. 2).

After several reads of these aforementioned pages, I could not really follow or appreciate the “irreversibility” and “uncertainty” dimensions utilised to assess how dynamic current frameworks really are. However, I felt comforted when Ghemawat concludes that “quite a few” of said frameworks “seem subject to some practical limitations” (p. 19). This comfort was short-lived, though, when he finishes the paper with the frustrating and seemingly throwaway line that the way forward, as it were, “is to shift some attention away from the chronologies of frameworks to historiography that attempts to assess them in some fashion” (p. 21). I immediately asked myself: “well, why didn’t he just do this, then??”

For me, and I trust also BHR readers, a historiographical piece embodying intellectual history, actor-network theory, or sociology of scientific knowledge to account for the “trials of strength” in strategy theory, the tension between contributions from the academy and those from business practice, and the current fascination with dynamics, would have been an easier and more interesting read. Like much being published in business and management history journals of late, Ghemawat’s paper is short on actual history and, notwithstanding the final sentence, even short on how to DO history. I was left wondering why this paper was published in this journal and asking myself what this paper’s place tells me about BHR? I have no answers for these questions but look forward to some in due course.

Why did cycling become professional as early as the late nineteenth century, while other sports (such as rugby) and other sport events (such as the Olympic Games) remained amateur until the 1980s? Why are the organizers of the most important bicycle races private companies, while in other sports such as soccer the main event organizer is a nonprofit organization? To what extent have bicycle races changed since the late nineteenth century? And how does cycling reflect long-term economic changes? The history of professional road cycling helps answer these questions and understand many related phenomena. This chapter provides a long-term, historical perspective on (1) professional road cycling’s economic agents, i.e., the public, race organizers, team sponsors and riders, and the relationships amongst them; (2) cycling’s governing body, the International Cycling Union; and (3) professional cycling’s final product, i.e., the show of bicycle races. More precisely, the chapter mostly focuses on the history of male professional road cycling in Western Europe since the late nineteenth century. It is founded on both an analysis of quantitative time series on the Grand Tours (and, to some extent, the classics) and a review of the existing literature on the history of professional cycling, whether economic history, institutional history, cultural history, or sport history.

Reviewed by: Stefano Tijerina, Ph.D.

The professionalization and commercialization of sports illustrate the forces of capitalism in action, as their culture and institutional structures transition from the local to the global in response to the demands of the market and the increasing interdependence among multiple private and public stakeholders. In his brief history of professional road cycling, Jean-François Mignot demonstrates how the sport was transformed over the twentieth century as it moved from amateur to professional. Mignot argues that the professionalization of this sport anticipated that of many other international sports because the forces of capitalism pressured the athletes to abandon their amateur status early on in order to secure an income.[1] His research reveals the early infiltration of the private sector into the culture of cycling in Europe, the institutional transformation of the sport, the market’s impact on the institutional structure of bicycle racing, and its integration into the global system. Ultimately, his historical analysis makes it possible to draw parallels with the processes of transformation experienced by other goods, commodities, and services that adapted to the inevitable pressures of the expansion of capitalism.

Jean-François Mignot’s research shows that the idea of organizing road race competitions around the commonly used bicycle emerged from the desire of newspapers across Europe to sell more copies through this new and creative marketing scheme. Newspapers in France, Belgium, Spain, and Italy began organizing races on public roads in the late 1800s to show the public that humans on bicycles could cover vast distances across flat and mountainous terrain. As indicated by Mignot, early races of 25 to 70 hours in duration, covering 250 to 400 kilometers, became epic tests of endurance and perseverance among extraordinary European athletes.[2] The media’s construction of these epic figures created the thirst for road cycling, but it was the fact that the spectator standing on the side of the road could watch the spectacle for only a few seconds, and depended on the print media to recreate the rest of the race, that pushed newspapers into the sponsorship business. It was this interdependent relationship between spectator, athlete, and newspaper that inspired the print media industry to organize these road races, hoping that the races would become magnets for advertisement sales. As indicated by Mignot, “cycling fans demanded more information” and “pictures of the race,” and the race-organizing newspapers were interested in supplying that demand, covering the races in detail as they watched their circulations increase.[3]

The one-day races or “Classics” and the three-week “Grand Tours” became the backbone of professional road racing in Europe. By the 1930s newspapers had monopolized the sponsorship of the events, while fans filled the roadways accompanied by publicity caravans “that distributed product samples to spectators.”[4] Meanwhile bicycle and tire companies became the sponsors of teams, as individual riders were replaced by teams that worked on behalf of the stars that made up the top cycling teams in Europe.[5]

In the early stages of professionalization, cycling stars did not receive wages and were therefore forced to secure their income through race earnings. The increase in the popularity of the sport was followed by an increase in riders’ incomes.[6] The interdependent relations necessary for the expansion of capitalism slowly developed: increasing sales motivated the newspapers to improve the quality of the spectacle by increasing the race winnings, forcing the sponsors to offer better wages in order to recruit and retain the loyalty of the top cyclists, ultimately attracting a larger fan base that in turn attracted secondary sponsors, who turned the caravans into marketing spectacles as well. This became even more lucrative as other means of communication joined in, particularly radio and later television.

Jean-François Mignot points out that the first three decades of the Cold War were a period of crisis for the sport in Europe, emphasizing that urbanization and the increasing sales of motorcycles forced bicycle manufacturers to cut their team sponsorship funding, ultimately sending the salaries of professional riders into a downward spiral.[7] This, argues Mignot, forced the professional rider to seek sponsorships outside the bicycle world.[8] The stars and their teams began to tap the “extra-sportif” market for sponsorship, and this market segment was quick to capitalize on the opportunity.[9]

Jean-François Mignot points out that sponsoring newspapers and bicycle companies, interested in protecting their own profit margins, opposed the penetration of “extra-sportif” sponsors by trying to control the rules of the sport so as to impede their participation, but in the end the market forces prevailed.[10] This European crisis, which unfolded between the 1950s and 1980s, was in fact the initial era of the global commercialization of the sport. Mignot’s Eurocentrism prevents him from moving beyond the region’s Grand Tours and Classics, and from recognizing that the “extra-sportif” sponsorships that challenged the status quo took professional cycling outside Europe and introduced it to the rest of the world. For example, by the 1950s radio transmissions of the European races were common in distant places like Colombia, where the local private sector had replicated the European business model and established lucrative professional road races to supply the local demand for professional bicycle road racing. The first edition of the Colombian Grand Tour, La Vuelta a Colombia, was organized in 1951, and by then several local Classics, like the Tunja-Bucaramanga and the Medellín-Sansón, were already engrained in Colombian cycling culture. As in Europe, local newspapers like El Tiempo became interested in sponsoring the local Grand Tour as a means to increase sales and circulation, but contrary to the European distrust of “extra-sportif” sponsors, the Colombian organizers welcomed other private local sponsors, including the national airline Avianca, the Bavaria brewery, Avisos Zeón and the Flota Mercante Grancolombiana.[11]

The crisis of professional bicycle road racing in Europe described by Mignot was certainly caused by a decreasing popularity of the sport and the internal struggles over the monopoly of the sponsorship and management of the sport, but it was also the market’s response to the emergence of other professional sports in Europe as well as the professional cyclist’s ability to capitalize on the globalization of the sport. It was an illustration of how, in a capitalist system, the internal saturation of a market led to the natural expansion into other global markets, as in the case of Colombia in the 1940s and 1950s.[12]

Such was the case of the French-born José Beyaert, the 1948 Olympic road race champion, who moved to Colombia after the Second World War, won the second edition of the Vuelta a Colombia in 1952, and later established a career as coach of the Colombian national cycling team.[13] Beyaert made Colombia his home, advancing the professionalization of the sport and becoming a key player in what would later become one of the cycling powers of the world. The sport expanded to all corners of the world between the 1950s and the 1980s; it was a period of crisis for Europe, as Mignot points out, but a glorious time for global professional bicycle road racing.

Television was the game-changer, spearheading the resurgence of professional cycling in Europe in the 1980s. Taking advantage of the integration of Europe, race organizers capitalized on the magic of television to attract new European audiences, redesigning the stage circuits of the Grand Tours (Giro d’Italia, Vuelta a España, and the Tour de France) with the intention of tapping new urban centers outside Spain, France, and Italy.[14] Television also globalized the European Grand Tours, introducing the cycling stars to the world, giving sponsors the opportunity to reach a global audience, selling commercial airtime, and as a result increasing revenues, salaries and profits for the whole sport.

Jean-François Mignot points out that the globalization of the sport also changed the nature of cycling teams. By the 1980s the teams competing in the Grand Tours were no longer made up only of Spanish, Italian, and French riders; their nationalities diversified, and so did their sponsors.[15] Although Mignot highlights the fact that by 1986 the American Greg LeMond had won the Tour de France, and that Colombia’s Lucho Herrera conquered the Vuelta a España (1987), the Russian Evgueni Berzin the Giro d’Italia (1994), and the Australian Cadel Evans the Tour de France (2011), he does not point out that these foreign cyclists also brought with them new local sponsors that then began to compete with European sponsors.[16] Mignot avoids talking about the American Lance Armstrong, leaving a large gap in the history of the globalization of the sport, considering that the American rider won seven consecutive Tour de France championships (1999-2005) before the US Anti-Doping Agency and the Union Cycliste Internationale stripped him of his titles after a doping scandal. Although LeMond popularized cycle racing in the United States, it was Armstrong who converted it into a multi-billion-dollar industry, bringing American brands such as RadioShack and Motorola into the world of cycling.

Jean-François Mignot’s research illustrates how the sport expanded globally as the Western world exported the idea of the professionalization and commercialization of cycling, taking advantage of the spread of Western culture across the world, the increasing leisure time and incomes of the global population, and the communications technology that allowed viewers everywhere to follow the stage-by-stage action of the Grand Tours and the Classics live. Nevertheless, his Eurocentric approach prevents him from explaining how the professionalization of the sport evolved outside Europe. Although Mignot clarifies early on that his analysis centers on Europe, this approach weakens his argument regarding the globalization of the sport and its repercussions on the European construct, as foreigners began to conquer and dominate the sport: the Americans Greg LeMond and Lance Armstrong, and the current stars, the Kenyan-born Christopher Froome and the Colombian climber Nairo Quintana. A broader global perspective would have allowed Mignot to test whether the professionalization of the sport in other markets was also spearheaded by local newspapers or whether, on the contrary, other media and non-media sponsors jumped on this business opportunity. It would also have been important to identify when professionalization took place in other markets, in order to establish whether the influence of the European sport crossed borders promptly, or to identify the political, economic, social, and cultural factors that delayed its expansion into other global markets.
Moreover, it would have been important for Mignot to link the policies of the Union Cycliste Internationale to the globalization of the sport, as well as the escalation of global competition among bicycle manufacturers, and the global competition between scientists, technological designers, and pharmaceutical industries centered on the legal and illegal preparation of the modern athlete.

Abstract: Comparisons of economic performance over space and time largely depend on how statistical evidence from national accounts and historical estimates are spliced. To allow for changes in relative prices, GDP benchmark years in national accounts are periodically replaced with new and more recent ones. Thus, a homogeneous long-run GDP series requires linking different temporal segments of national accounts. The choice of the splicing procedure may result in substantial differences in GDP levels and growth, particularly as an economy undergoes deep structural transformation. An inadequate splicing may result in a serious bias in the measurement of GDP levels and growth rates.

Alternative splicing solutions are discussed in this paper for the particular case of Spain, a fast-growing country in the second half of the twentieth century. It is concluded that the usual linking procedure, retropolation, has serious flaws, as it tends to bias GDP levels upwards and, consequently, to underestimate growth rates, especially for developing countries experiencing structural change. An alternative interpolation procedure is proposed.

Reviewed by Cristián Ducoing

Dealing with National Accounts (hereafter NA) is hard; dealing with NA in the long run is even harder…

Broadly speaking, a quick and ready comparison of economic performance over a period of sixty years or more would typically source its data from the Maddison project. However, as with any other human endeavour, these data are not free from error. Potential and actual errors in measuring economic growth are highly relevant to economic history research, particularly if we want to improve its public policy impact. See for instance the (brief) discussion in Xavier Marquez’s blog of how the choice of measure can significantly under- or overstate the importance of Lee Kuan Yew as ruler of Singapore.

The paper by Leandro Prados de la Escosura therefore contributes to a growing debate on which is the “best” GDP measure to ascertain economic performance in the long run (i.e. 60 or more years). For some time now Prados de la Escosura has been searching for new ways to measure economic development in the long run, a body of work that now comprises over 60 articles in peer-reviewed journals, book chapters and academic books. In this paper, the latest addition to his assessment of welfare levels in the long run, Prados de la Escosura discusses the problems of using alternative benchmarks and of splicing NA in a country that underwent marked structural change, Spain. The main aim of the article is to ascertain the differences that can appear in long-run NA depending on the method used to splice NA benchmarks. So, the BIG question is: retropolation or interpolation?

Retropolation, as Prados de la Escosura explains, is a method “widely used by national accountants (and implicitly accepted in international comparisons). [T]he backward projection, or retropolation, approach accepts the reference level provided by the most recent benchmark estimate”. In other words, the researcher accepts the current benchmark and splices it with the past series (using the variation rates of the past estimations). What is the issue here? Selecting the most recent benchmark results in a higher GDP estimate because, by its nature, this benchmark encompasses a greater number of economic activities. For instance, the ranking of relative income for the UK and France changes significantly when estimates of prostitution and narcotrafficking are included. This striking example shows how, starting from a higher current level and applying past variation rates, long-run estimates of GDP will be artificially inflated. This approach can thus lead us to historical anomalies such as a richer Spain overtaking France in the nineteenth century (see Prados de la Escosura’s figure 3 below).

An alternative to the backward-projection linkage is the interpolation procedure. This method accepts the levels computed directly for each benchmark year as the best possible estimates, on the grounds that they were obtained with “complete” information on quantities and prices for the earlier period. This procedure keeps the initial level unaltered, which will probably be lower than the level estimated by the retropolation approach.
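The contrast between the two procedures can be sketched in a few lines of code. This is a deliberately simplified toy illustration, not Prados de la Escosura’s actual procedure: here `retropolate` discards the old benchmark level and carries the new benchmark backwards with the old series’ growth rates, while `interpolate` keeps both benchmark levels and spreads the discrepancy geometrically over the intervening years.

```python
# Toy illustration of splicing an old GDP segment onto a re-based benchmark.
# Not the paper's exact method; levels are an index, not real data.

def retropolate(old_series, new_benchmark):
    """Project the new terminal benchmark backwards using the old series'
    year-on-year growth rates; the old benchmark level is discarded."""
    growth = [old_series[i + 1] / old_series[i] for i in range(len(old_series) - 1)]
    spliced = [new_benchmark]
    for g in reversed(growth):
        spliced.insert(0, spliced[0] / g)
    return spliced

def interpolate(old_series, new_benchmark):
    """Keep both benchmark levels, distributing the discrepancy between the
    old series' terminal value and the new benchmark geometrically."""
    n = len(old_series) - 1
    ratio = new_benchmark / old_series[-1]
    return [v * ratio ** (i / n) for i, v in enumerate(old_series)]

# Old segment: index 100 in the first benchmark year, growing 3% a year for 12 years.
old = [100 * 1.03 ** t for t in range(13)]
# The new benchmark re-bases the terminal year 15% above the old level.
new_bench = old[-1] * 1.15

retro = retropolate(old, new_bench)
inter = interpolate(old, new_bench)

# Retropolation lifts the entire past series by 15%, keeping growth rates
# intact; interpolation keeps the initial level and implies faster growth.
print(round(retro[0], 1), round(inter[0], 1))
```

The printed initial levels show the bias the paper discusses: retropolation raises the whole historical level (and so understates measured growth), whereas interpolation preserves the earlier benchmark and absorbs the revision as extra growth along the way.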

A more recent method to splice NA series, derived from those described above, is the “mixed splicing” proposed by Angel de la Fuente (2014), which uses a parameter to capture the severity of the initial error in the original benchmark. The problem with this solution is the arbitrary value assigned to that parameter. Let’s look at this graphically, using data from the Maddison project. As is well known, these figures were recently updated by Jutta Bolt and Jan Luiten van Zanden, drawing on a database built through the contributions of several scholars around the world and expressed in a common currency (the international Geary-Khamis dollar) to measure NA. Figure 1 plots GDP per capita for France, the UK, the USA and Spain using data from the Maddison project.

GDP per capita, 1990 G-K dollars. France, UK, USA and Spain, 1850-2012

The graph suggests that Spain was always poorer than France. But this could change if the method chosen to splice NA were the retropolation approach. We probably need a graph with France alone to appreciate the differences; please see figure 2:

Figure 2 now suggests an apparent convergence of Spain with France over the period 1957 to 2006. The average growth rate for Spain in this period was almost 3.5% p.a., while in the case of France average growth shrinks to 2.2% p.a. Anecdotal observation as well as documented evidence on Spanish levels of inequality and poverty make this result hard to believe. Prados de la Escosura helps us ascertain these differences in measurement graphically by bringing together estimates from the retropolation and interpolation approaches in a single graph (see figure 3 below):
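For readers who want to check such figures themselves, average annual rates like those quoted above are compound annual growth rates between two benchmark levels. A minimal sketch of the calculation (the numbers below are hypothetical, not the Maddison values):

```python
def cagr(start_level, end_level, years):
    """Compound annual growth rate between two levels, in percent per year."""
    return ((end_level / start_level) ** (1 / years) - 1) * 100

# A per-capita GDP series that doubles over 20 years grows roughly 3.5% p.a.,
# which is why a spliced series with a lifted starting level reports slower
# measured growth than one anchored at the original benchmark.
print(round(cagr(100.0, 200.0, 20), 1))
```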

In summary, this paper by Prados de la Escosura is a great contribution to the debate on long-run economic performance. It poses interesting challenges to scholars researching long-term growth and dealing with NA and international comparisons. Benchmarks and the splicing of different sources are always a source of problems, for international comparative studies but also for the long-term study of a single country. Moving beyond the technical implications discussed by Prados de la Escosura in this paper, economic history research could benefit from a debate seeking alternative measures or proxies for long-run growth, because GDP as the main basis of international comparisons is becoming “dated” and ineffective for new research on inequality, genuine savings, energy consumption, complexity and the gaps between developing and developed countries, to name but a few.