
Friday, 26 October 2018

The Age of Social Transformation

by Peter F. Drucker

A survey of the epoch that began early in this century, and an analysis of its latest manifestations: an economic order in which knowledge, not labor or raw material or capital, is the key resource; a social order in which inequality based on knowledge is a major challenge; and a polity in which government cannot be looked to for solving social and economic problems.

No century in recorded history has experienced so many social transformations and such radical ones as the twentieth century. They, I submit, may turn out to be the most significant events of this, our century, and its lasting legacy. In the developed free-market countries—which contain less than a fifth of the earth's population but are a model for the rest—work and work force, society and polity, are all, in the last decade of this century, qualitatively and quantitatively different not only from what they were in the first years of this century but also from what has existed at any other time in history: in their configurations, in their processes, in their problems, and in their structures.

Far smaller and far slower social changes in earlier periods triggered civil wars, rebellions, and violent intellectual and spiritual crises. The extreme social transformations of this century have caused hardly any stir. They have proceeded with a minimum of friction, with a minimum of upheavals, and, indeed, with a minimum of attention from scholars, politicians, the press, and the public. To be sure, this century of ours may well have been the cruelest and most violent in history, with its world and civil wars, its mass tortures, ethnic cleansings, genocides, and holocausts. But all these killings, all these horrors inflicted on the human race by this century's murderous "charismatics," hindsight clearly shows, were just that: senseless killings, senseless horrors, "sound and fury, signifying nothing." Hitler, Stalin, and Mao, the three evil geniuses of this century, destroyed. They created nothing.

Indeed, if this century proves one thing, it is the futility of politics. Even the most dogmatic believer in historical determinism would have a hard time explaining the social transformations of this century as caused by the headline-making political events, or the headline-making political events as caused by the social transformations. But it is the social transformations, like ocean currents deep below the hurricane-tormented surface of the sea, that have had the lasting, indeed the permanent, effect. They, rather than all the violence of the political surface, have transformed not only the society but also the economy, the community, and the polity we live in. The age of social transformation will not come to an end with the year 2000—it will not even have peaked by then.

The Social Structure Transformed

Before the First World War, farmers composed the largest single group in every country. They no longer made up the bulk of the population everywhere, as they had from the dawn of history to the end of the Napoleonic Wars, a hundred years earlier. But farmers still made up a near-majority in every developed country except England and Belgium—in Germany, France, Japan, the United States—and, of course, in all underdeveloped countries, too. On the eve of the First World War it was considered a self-evident axiom that developed countries—the United States and Canada being the only exceptions—would increasingly have to rely on food imports from nonindustrial, nondeveloped areas.

Today only Japan among major developed free-market countries is a heavy importer of food. (It is one unnecessarily, for its weakness as a food producer is largely the result of an obsolete rice-subsidy policy that prevents the country from developing a modern, productive agriculture.) And in all developed free-market countries, including Japan, farmers today are at most five percent of the population and work force—that is, one tenth of the proportion of eighty years ago. Actually, productive farmers make up less than half of the total farm population, or no more than two percent of the work force. And these agricultural producers are not "farmers" in most senses of the word; they are "agribusiness," which is arguably the most capital-intensive, most technology-intensive, and most information-intensive industry around. Traditional farmers are close to extinction even in Japan. And those that remain have become a protected species kept alive only by enormous subsidies.

The second-largest group in the population and work force of every developed country around 1900 was composed of live-in servants. They were considered as much a law of nature as farmers were. Census categories of the time defined a "lower middle class" household as one that employed fewer than three servants, and as a percentage of the work force domestics grew steadily up to the First World War. Eighty years later live-in domestic servants scarcely exist in developed countries. Few people born since the Second World War—that is, few people under fifty—have even seen any except on the stage or in old movies.

In the developed society of 2000 farmers are little but objects of nostalgia, and domestic servants are not even that.

Yet these enormous transformations in all developed free-market countries were accomplished without civil war and, in fact, in almost total silence. Only now that their farm population has shrunk to near zero do the totally urban French loudly assert that theirs should be a "rural country" with a "rural civilization."

The Rise and Fall of the Blue-Collar Worker

One reason why the transformations caused so little stir (indeed, the main reason) was that by 1900 a new class, the blue-collar worker in manufacturing industry—Marx's "proletarian"—had become socially dominant. Farmers were loudly adjured to "raise less corn and more hell," but they paid little attention. Domestic servants were clearly the most exploited class around. But when people before the First World War talked or wrote about the "social question," they meant blue-collar industrial workers. Blue-collar industrial workers were still a fairly small minority of the population and work force—right up to 1914 they made up an eighth or a sixth of the total at most—and were still vastly outnumbered by the traditional lower classes of farmers and domestic servants. But early twentieth-century society was obsessed with blue-collar workers, fixated on them, bewitched by them.

Farmers and domestic servants were everywhere. But as classes, they were invisible. Domestic servants lived and worked inside individual homes or on individual farms in small and isolated groups of two or three. Farmers, too, were dispersed. More important, these traditional lower classes were not organized. Indeed, they could not be organized. Slaves employed in mining or in producing goods had revolted frequently in the ancient world—though always unsuccessfully. But there is no mention in any book I ever read of a single demonstration or a single protest march by domestic servants in any place, at any time. There have been peasant revolts galore. But except for two Chinese revolts in the nineteenth century—the Taiping Rebellion, in midcentury, and the Boxer Rebellion, at the century's end, both of which lasted for years and came close to overturning the regime—all peasant rebellions in history have fizzled out after a few bloody weeks. Peasants, history shows, are very hard to organize and do not stay organized—which is why they earned Marx's contempt.

The new class, industrial workers, was extremely visible. This is what made these workers a "class." They lived perforce in dense population clusters and in cities—in St. Denis, outside Paris; in Berlin's Wedding and Vienna's Ottakring; in the textile towns of Lancashire; in the steel towns of America's Monongahela Valley; and in Japan's Kobe. And they soon proved eminently organizable, with the first strikes occurring almost as soon as there were factory workers. Charles Dickens's harrowing tale of murderous labor conflict, Hard Times, was published in 1854, only six years after Marx and Engels wrote The Communist Manifesto.

By 1900 it had become quite clear that industrial workers would not become the majority, as Marx had predicted only a few decades earlier. They therefore would not overwhelm the capitalists by their sheer numbers. Yet the most influential radical writer of the period before the First World War, the French ex-Marxist and revolutionary syndicalist Georges Sorel, found widespread acceptance for his 1906 thesis that the proletarians would overturn the existing order and take power by their organization and in and through the violence of the general strike. It was not only Lenin who made Sorel's thesis the foundation of his revision of Marxism and built around it his strategy in 1917 and 1918. Both Mussolini and Hitler—and Mao, ten years later—built their strategies on Sorel's thesis. Mao's "power grows out of the barrel of a gun" is almost a direct quote from Sorel. The industrial worker became the "social question" of 1900 because he was the first lower class in history that could be organized and could stay organized.

No class in history has ever risen faster than the blue-collar worker. And no class in history has ever fallen faster.

In 1883, the year of Marx's death, "proletarians" were still a minority not just of the population but also of industrial workers. The majority in industry were then skilled workers employed in small craft shops, each containing twenty or thirty workers at most. Of the anti-heroes of the nineteenth century's best "proletarian" novel, The Princess Casamassima, by Henry James—published in 1886 (and surely only Henry James could have given such a title to a story of working-class terrorists!)—one is a highly skilled bookbinder, the other an equally skilled pharmacist. By 1900 "industrial worker" had become synonymous with "machine operator" and implied employment in a factory along with hundreds if not thousands of people. These factory workers were indeed Marx's proletarians—without social position, without political power, without economic or purchasing power.

The workers of 1900—and even of 1913—received no pensions, no paid vacation, no overtime pay, no extra pay for Sunday or night work, no health or old-age insurance (except in Germany), no unemployment compensation (except, after 1911, in Britain); they had no job security whatever. Fifty years later, in the 1950s, industrial workers had become the largest single group in every developed country, and unionized industrial workers in mass-production industry (which was then dominant everywhere) had attained upper-middle-class income levels. They had extensive job security, pensions, long paid vacations, and comprehensive unemployment insurance or "lifetime employment." Above all, they had achieved political power. In Britain the labor unions were considered to be the "real government," with greater power than the Prime Minister and Parliament, and much the same was true elsewhere. In the United States, too—as in Germany, France, and Italy—the labor unions had emerged as the country's most powerful and best organized political force. And in Japan they had come close, in the Toyota and Nissan strikes of the late forties and early fifties, to overturning the system and taking power themselves.

Thirty-five years later, in 1990, industrial workers and their unions were in retreat. They had become marginal in numbers. Whereas industrial workers who make or move things had accounted for two fifths of the American work force in the 1950s, they accounted for less than one fifth in the early 1990s—that is, for no more than they had accounted for in 1900, when their meteoric rise began. In the other developed free-market countries the decline was slower at first, but after 1980 it began to accelerate everywhere. By the year 2000 or 2010, in every developed free-market country, industrial workers will account for no more than an eighth of the work force. Union power has been declining just as fast.

Unlike domestic servants, industrial workers will not disappear—any more than agricultural producers have disappeared or will disappear. But just as the traditional small farmer has become a recipient of subsidies rather than a producer, so will the traditional industrial worker become an auxiliary employee. His place is already being taken by the "technologist"—someone who works both with hands and with theoretical knowledge. (Examples are computer technicians, x-ray technicians, physical therapists, medical-lab technicians, pulmonary technicians, and so on, who together have made up the fastest-growing group in the U.S. labor force since 1980.) And instead of a class—a coherent, recognizable, defined, and self-conscious group—industrial workers may soon be just another "pressure group."

Chroniclers of the rise of the industrial worker tend to highlight the violent episodes—especially the clashes between strikers and the police, as in America's Pullman strike. The reason is probably that the theoreticians and propagandists of socialism, anarchism, and communism—beginning with Marx and continuing to Herbert Marcuse in the 1960s—incessantly wrote and talked of "revolution" and "violence." Actually, the rise of the industrial worker was remarkably nonviolent. The enormous violence of this century—the world wars, ethnic cleansings, and so on—was all violence from above rather than violence from below; and it was unconnected with the transformations of society, whether the dwindling of farmers, the disappearance of domestic servants, or the rise of the industrial worker. In fact, no one even tries anymore to explain these great convulsions as part of "the crisis of capitalism," as was standard Marxist rhetoric only thirty years ago.

Contrary to Marxist and syndicalist predictions, the rise of the industrial worker did not destabilize society. Instead it has emerged as the century's most stabilizing social development. It explains why the disappearance of the farmer and the domestic servant produced no social crises. Both the flight from the land and the flight from domestic service were voluntary. Farmers and maids were not "pushed off" or "displaced." They went into industrial employment as fast as they could. Industrial jobs required no skills they did not already possess, and no additional knowledge. In fact, farmers on the whole had a good deal more skill than was required to be a machine operator in a mass-production plant—and so did many domestic servants. To be sure, industrial work paid poorly until the First World War. But it paid better than farming or household work. Industrial workers in the United States until 1913—and in some countries, including Japan, until the Second World War—worked long hours. But they worked shorter hours than farmers and domestic servants. What's more, they worked specified hours: the rest of the day was their own, which was true neither of work on the farm nor of domestic work.

The history books record the squalor of early industry, the poverty of the industrial workers, and their exploitation. Workers did indeed live in squalor and poverty, and they were exploited. But they lived better than those on a farm or in a household, and were generally treated better.

Proof of this is that infant mortality dropped immediately when farmers and domestic servants moved into industrial work. Historically, cities had never reproduced themselves. They had depended for their perpetuation on constant new recruits from the countryside. This was still true in the mid-nineteenth century. But with the spread of factory employment the city became the center of population growth. In part this was a result of new public-health measures: purification of water, collection and treatment of wastes, quarantine against epidemics, inoculation against disease. These measures—and they were effective mostly in the city—counteracted, or at least contained, the hazards of crowding that had made the traditional city a breeding ground for pestilence. But the largest single factor in the exponential drop in infant mortality as industrialization spread was surely the improvement in living conditions brought about by the factory. Housing and nutrition became better, and hard work and accidents came to take less of a toll. The drop in infant mortality—and with it the explosive growth in population—correlates with only one development: industrialization. The early factory was indeed the "Satanic Mill" of William Blake's great poem. But the countryside was not "England's green and pleasant Land" of which Blake sang; it was a picturesque but even more satanic slum.

For farmers and domestic servants, industrial work was an opportunity. It was, in fact, the first opportunity that social history had given them to better themselves substantially without having to emigrate. In the developed free-market countries over the past 100 or 150 years every generation has been able to expect to do substantially better than the generation preceding it. The main reason has been that farmers and domestic servants could and did become industrial workers.

Because industrial workers are concentrated in groups, systematic work on their productivity was possible. Beginning in 1881, two years before Marx's death, the systematic study of work, tasks, and tools raised the productivity of manual work in making and moving things by three to four percent compound on average per year—for a fiftyfold increase in output per worker over 110 years. On this rest all the economic and social gains of the past century. And contrary to what "everybody knew" in the nineteenth century—not only Marx but all the conservatives as well, such as J. P. Morgan, Bismarck, and Disraeli—practically all these gains have accrued to the industrial worker, half of them in the form of sharply reduced working hours (with the cuts ranging from 40 percent in Japan to 50 percent in Germany), and half of them in the form of a twenty-five-fold increase in the real wages of industrial workers who make or move things.
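
A back-of-the-envelope check of that compounding, offered here as an editorial verification rather than as part of Drucker's text: growth at a constant rate r for n years multiplies output by (1 + r)^n, and

\[
(1.035)^{110} \approx 44, \qquad (1.036)^{110} \approx 49, \qquad (1.04)^{110} \approx 75,
\]

so a rate of three to four percent compounded over 110 years does indeed bracket an increase on the order of fiftyfold.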

There were thus very good reasons why the rise of the industrial worker was peaceful rather than violent, let alone revolutionary. But what explains the fact that the fall of the industrial worker has been equally peaceful and almost entirely free of social protest, of upheaval, of serious dislocation, at least in the United States?

The Rise of the Knowledge Worker

The rise of the class succeeding industrial workers is not an opportunity for industrial workers. It is a challenge. The newly emerging dominant group is "knowledge workers." The very term was unknown forty years ago. (I coined it in a 1959 book, Landmarks of Tomorrow.) By the end of this century knowledge workers will make up a third or more of the work force in the United States—as large a proportion as manufacturing workers ever made up, except in wartime. The majority of them will be paid at least as well as, or better than, manufacturing workers ever were. And the new jobs offer much greater opportunities.

But—and this is a big but—the great majority of the new jobs require qualifications the industrial worker does not possess and is poorly equipped to acquire. They require a good deal of formal education and the ability to acquire and to apply theoretical and analytical knowledge. They require a different approach to work and a different mind-set. Above all, they require a habit of continuous learning. Displaced industrial workers thus cannot simply move into knowledge work or services the way displaced farmers and domestic workers moved into industrial work. At the very least they have to change their basic attitudes, values, and beliefs.

In the closing decades of this century the industrial work force has shrunk faster and further in the United States than in any other developed country—while industrial production has grown faster than in any other developed country except Japan.

The shift has aggravated America's oldest and least tractable problem: the position of blacks. In the fifty years since the Second World War the economic position of African-Americans in America has improved faster than that of any other group in American social history—or in the social history of any country. Three fifths of America's blacks rose into middle-class incomes; before the Second World War the figure was one twentieth. But half that group rose into middle-class incomes and not into middle-class jobs. Since the Second World War more and more blacks have moved into blue-collar unionized mass-production industry—that is, into jobs paying middle-class and upper-middle-class wages while requiring neither education nor skill. These are precisely the jobs, however, that are disappearing the fastest. What is amazing is not that so many blacks did not acquire an education but that so many did. The economically rational thing for a young black in postwar America was not to stay in school and learn; it was to leave school as early as possible and get one of the plentiful mass-production jobs. As a result, the fall of the industrial worker has hit America's blacks disproportionately hard—quantitatively, but qualitatively even more. It has blunted what was the most potent role model in the black community in America: the well-paid industrial worker with job security, health insurance, and a guaranteed retirement pension—yet possessing neither skill nor much education.

But, of course, blacks are a minority of the population and work force in the United States. For the overwhelming majority—whites, but also Latinos and Asians—the fall of the industrial worker has caused amazingly little disruption and nothing that could be called an upheaval. Even in communities that were once totally dependent on mass-production plants that have gone out of business or have drastically slashed employment (steel cities in western Pennsylvania and eastern Ohio, for instance, or automobile cities like Detroit and Flint, Michigan), unemployment rates for nonblack adults fell within a few short years to levels barely higher than the U.S. average—and that means to levels barely higher than the U.S. "full-employment" rate. Even in these communities there has been no radicalization of America's blue-collar workers.

The only explanation is that for the nonblack blue-collar community the development came as no surprise, however unwelcome, painful, and threatening it may have been to individual workers and their families. Psychologically—but in terms of values, perhaps, rather than in terms of emotions—America's industrial workers must have been prepared to accept as right and proper the shift to jobs that require formal education and that pay for knowledge rather than for manual work, whether skilled or unskilled.

In the United States the shift had by 1990 or so largely been accomplished. But so far it has occurred only in the United States. In the other developed free-market countries, in western and northern Europe and in Japan, it is just beginning in the 1990s. It is, however, certain to proceed rapidly in these countries from now on, perhaps faster than it originally did in the United States. The fall of the industrial worker in the developed free-market countries will also have a major impact outside the developed world. Developing countries can no longer expect to base their development on their comparative labor advantage—that is, on cheap industrial labor.

It is widely believed, especially by labor-union officials, that the fall of the blue-collar industrial worker in the developed countries was largely, if not entirely, caused by moving production "offshore" to countries with abundant supplies of unskilled labor and low wage rates. But this is not true.

There was something to the belief thirty years ago. Japan, Taiwan, and, later, South Korea did indeed (as explained in some detail in my 1993 book Post-Capitalist Society) gain their initial advantage in the world market by combining, almost overnight, America's invention of training for full productivity with wage costs that were still those of a pre-industrial country. But this technique has not worked at all since 1970 or 1975.

In the 1990s only an insignificant percentage of manufactured goods imported into the United States are produced abroad because of low labor costs. While total imports in 1990 accounted for about 12 percent of the U.S. gross personal income, imports from countries with significantly lower wage costs accounted for less than three percent—and only half of those were imports of manufactured products. Practically none of the decline in American manufacturing employment from some 30 or 35 percent of the work force to 15 or 18 percent can therefore be attributed to moving work to low-wage countries. The main competition for American manufacturing industry—for instance, in automobiles, in steel, and in machine tools—has come from countries such as Japan and Germany, where wage costs have long been equal to, if not higher than, those in the United States. The comparative advantage that now counts is in the application of knowledge—for example, in Japan's total quality management, lean manufacturing processes, just-in-time delivery, and price-based costing, or in the customer service offered by medium-sized German or Swiss engineering companies. This means, however, that developing countries can no longer expect to base their development on low wages. They, too, must learn to base it on applying knowledge—just at the time when most of them (China, India, and much of Latin America, let alone black Africa) will have to find jobs for millions of uneducated and unskilled young people who are qualified for little except yesterday's blue-collar industrial jobs.
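
Spelling out the arithmetic behind "practically none," as an editorial gloss on the figures just cited: if imports from low-wage countries were under three percent of gross personal income and only half of those were manufactured goods, then manufactured imports from low-wage countries amounted to at most

\[
0.03 \times \tfrac{1}{2} = 0.015,
\]

that is, about 1.5 percent of U.S. gross personal income, far too small to explain a decline in manufacturing employment of some fifteen or more percentage points of the work force.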

But for the developed countries, too, the shift to knowledge-based work poses enormous social challenges. Despite the factory, industrial society was still essentially a traditional society in its basic social relationships of production. But the emerging society, the one based on knowledge and knowledge workers, is not. It is the first society in which ordinary people—and that means most people—do not earn their daily bread by the sweat of their brow. It is the first society in which "honest work" does not mean a callused hand. It is also the first society in which not everybody does the same work, as was the case when the huge majority were farmers or, as seemed likely only forty or thirty years ago, were going to be machine operators.

This is far more than a social change. It is a change in the human condition.

What it means—what are the values, the commitments, the problems, of the new society—we do not know.

But we do know that much will be different.

The Emerging Knowledge Society

Knowledge workers will not be the majority in the knowledge society, but in many if not most developed societies they will be the largest single population and work-force group. And even where outnumbered by other groups, knowledge workers will give the emerging knowledge society its character, its leadership, its social profile. They may not be the ruling class of the knowledge society, but they are already its leading class. And in their characteristics, social position, values, and expectations, they differ fundamentally from any group in history that has ever occupied the leading position.

In the first place, knowledge workers gain access to jobs and social position through formal education. A great deal of knowledge work requires highly developed manual skill and involves substantial work with one's hands. An extreme example is neurosurgery. The neurosurgeon's performance capacity rests on formal education and theoretical knowledge. An absence of manual skill disqualifies one for work as a neurosurgeon. But manual skill alone, no matter how advanced, will never enable anyone to be a neurosurgeon. The education that is required for neurosurgery and other kinds of knowledge work can be acquired only through formal schooling. It cannot be acquired through apprenticeship.

Knowledge work varies tremendously in the amount and kind of formal knowledge required. Some jobs have fairly low requirements, and others require the kind of knowledge the neurosurgeon possesses. But even if the knowledge itself is quite primitive, only formal education can provide it.

Education will become the center of the knowledge society, and the school its key institution. What knowledge must everybody have? What is "quality" in learning and teaching? These will of necessity become central concerns of the knowledge society, and central political issues. In fact, the acquisition and distribution of formal knowledge may come to occupy the place in the politics of the knowledge society which the acquisition and distribution of property and income have occupied in our politics over the two or three centuries that we have come to call the Age of Capitalism.

In the knowledge society, clearly, more and more knowledge, and especially advanced knowledge, will be acquired well past the age of formal schooling and increasingly, perhaps, through educational processes that do not center on the traditional school. But at the same time, the performance of the schools and the basic values of the schools will be of increasing concern to society as a whole, rather than being considered professional matters that can safely be left to "educators."

We can also predict with confidence that we will redefine what it means to be an educated person. Traditionally, and especially during the past 300 years (perhaps since 1700 or so, at least in the West, and since about that time in Japan as well), an educated person was somebody who had a prescribed stock of formal knowledge. The Germans called this knowledge allgemeine Bildung, and the English (and, following them, the nineteenth-century Americans) called it the liberal arts. Increasingly, an educated person will be somebody who has learned how to learn, and who continues learning, especially by formal education, throughout his or her lifetime.

There are obvious dangers to this. For instance, society could easily degenerate into emphasizing formal degrees rather than performance capacity. It could fall prey to sterile Confucian mandarins—a danger to which the American university is singularly susceptible. On the other hand, it could overvalue immediately usable, "practical" knowledge and underrate the importance of fundamentals, and of wisdom altogether.

A society in which knowledge workers dominate is under threat from a new class conflict: between the large minority of knowledge workers and the majority of people, who will make their living traditionally, either by manual work, whether skilled or unskilled, or by work in services, whether skilled or unskilled. The productivity of knowledge work—still abysmally low—will become the economic challenge of the knowledge society. On it will depend the competitive position of every single country, every single industry, every single institution within society. The productivity of the nonknowledge, services worker will become the social challenge of the knowledge society. On it will depend the ability of the knowledge society to give decent incomes, and with them dignity and status, to non-knowledge workers.

No society in history has faced these challenges. But equally new are the opportunities of the knowledge society. In the knowledge society, for the first time in history, the possibility of leadership will be open to all. Also, the possibility of acquiring knowledge will no longer depend on obtaining a prescribed education at a given age. Learning will become the tool of the individual—available to him or her at any age—if only because so much skill and knowledge can be acquired by means of the new learning technologies.

Another implication is that how well an individual, an organization, an industry, a country, does in acquiring and applying knowledge will become the key competitive factor. The knowledge society will inevitably become far more competitive than any society we have yet known—for the simple reason that with knowledge being universally accessible, there will be no excuses for nonperformance. There will be no "poor" countries. There will only be ignorant countries. And the same will be true for companies, industries, and organizations of all kinds. It will be true for individuals, too. In fact, developed societies have already become infinitely more competitive for individuals than were the societies of the beginning of this century, let alone earlier ones.

I have been speaking of knowledge. But a more accurate term is "knowledges," because the knowledge of the knowledge society will be fundamentally different from what was considered knowledge in earlier societies—and, in fact, from what is still widely considered knowledge. The knowledge of the German allgemeine Bildung or of the Anglo-American liberal arts had little to do with one's life's work. It focused on the person and the person's development, rather than on any application—if, indeed, it did not, like the nineteenth-century liberal arts, pride itself on having no utility whatever. In the knowledge society knowledge for the most part exists only in application. Nothing the x-ray technician needs to know can be applied to market research, for instance, or to teaching medieval history. The central work force in the knowledge society will therefore consist of highly specialized people. In fact, it is a mistake to speak of "generalists." What we will increasingly mean by that term is people who have learned how to acquire additional specialties rapidly in order to move from one kind of job to another—for example, from market research into management, or from nursing into hospital administration. But "generalists" in the sense in which we used to talk of them are coming to be seen as dilettantes rather than educated people.

This, too, is new. Historically, workers were generalists. They did whatever had to be done—on the farm, in the household, in the craftsman's shop. This was also true of industrial workers. But knowledge workers, whether their knowledge is primitive or advanced, whether there is a little of it or a great deal, will by definition be specialized. Applied knowledge is effective only when it is specialized. Indeed, the more highly specialized, the more effective it is. This goes for technicians who service computers, x-ray machines, or the engines of fighter planes. But it applies equally to work that requires the most advanced knowledge, whether research in genetics or research in astrophysics or putting on the first performance of a new opera.

Again, the shift from knowledge to knowledges offers tremendous opportunities to the individual. It makes possible a career as a knowledge worker. But it also presents a great many new problems and challenges. It demands for the first time in history that people with knowledge take responsibility for making themselves understood by people who do not have the same knowledge base.

How Knowledges Work

That knowledge in the knowledge society has to be highly specialized to be productive implies two new requirements: that knowledge workers work in teams, and that if knowledge workers are not employees, they must at least be affiliated with an organization.

There is a great deal of talk these days about "teams" and "teamwork." Most of it starts out with the wrong assumption—namely, that we have never before worked in teams. Actually people have always worked in teams; very few people ever could work effectively by themselves. The farmer had to have a wife, and the farm wife had to have a husband. The two worked as a team. And both worked as a team with their employees, the hired hands. The craftsman also had to have a wife, with whom he worked as a team—he took care of the craft work, and she took care of the customers, the apprentices, and the business altogether. And both worked as a team with journeymen and apprentices. Much discussion today assumes that there is only one kind of team. Actually there are quite a few. But until now the emphasis has been on the individual worker and not on the team. With knowledge work growing increasingly effective as it is increasingly specialized, teams become the work unit rather than the individual himself.

The team that is being touted now—I call it the "jazz combo" team—is only one kind of team. It is actually the most difficult kind of team both to assemble and to make work effectively, and the kind that requires the longest time to gain performance capacity. We will have to learn to use different kinds of teams for different purposes. We will have to learn to understand teams—and this is something to which, so far, very little attention has been paid. The understanding of teams, the performance capacities of different kinds of teams, their strengths and limitations, and the trade-offs between various kinds of teams will thus become central concerns in the management of people.

Equally important is the second implication of the fact that knowledge workers are of necessity specialists: the need for them to work as members of an organization. Only the organization can provide the basic continuity that knowledge workers need in order to be effective. Only the organization can convert the specialized knowledge of the knowledge worker into performance.

By itself, specialized knowledge does not yield performance. The surgeon is not effective unless there is a diagnosis—which, by and large, is not the surgeon's task and not even within the surgeon's competence. As a loner in his or her research and writing, the historian can be very effective. But to educate students, a great many other specialists have to contribute—people whose specialty may be literature, or mathematics, or other areas of history. And this requires that the specialist have access to an organization.

This access may be as a consultant, or it may be as a provider of specialized services. But for the majority of knowledge workers it will be as employees, full-time or part-time, of an organization, such as a government agency, a hospital, a university, a business, or a labor union. In the knowledge society it is not the individual who performs. The individual is a cost center rather than a performance center. It is the organization that performs.

What Is an Employee?

Most knowledge workers will spend most if not all of their working lives as "employees." But the meaning of the term will be different from what it has been traditionally—and not only in English but in German, Spanish, and Japanese as well.

Individually, knowledge workers are dependent on the job. They receive a wage or salary. They have been hired and can be fired. Legally each is an employee. But collectively they are the capitalists; increasingly, through their pension funds and other savings, the employees own the means of production. In traditional economics—and by no means only in Marxist economics—there is a sharp distinction between the "wage fund," all of which goes into consumption, and the "capital fund," or that part of the total income stream that is available for investment. And most social theory of industrial society is based, one way or another, on the relationship between the two, whether in conflict or in necessary and beneficial cooperation and balance. In the knowledge society the two merge. The pension fund is "deferred wages," and as such is a wage fund. But it is also increasingly the main source of capital for the knowledge society.

Perhaps more important, in the knowledge society the employees—that is, knowledge workers—own the tools of production. Marx's great insight was that the factory worker does not and cannot own the tools of production, and therefore is "alienated." There was no way, Marx pointed out, for the worker to own the steam engine and to be able to take it with him when moving from one job to another. The capitalist had to own the steam engine and to control it. Increasingly, the true investment in the knowledge society is not in machines and tools but in the knowledge of the knowledge worker. Without that knowledge the machines, no matter how advanced and sophisticated, are unproductive.

The market researcher needs a computer. But increasingly this is the researcher's own personal computer, and it goes along wherever he or she goes. The true "capital equipment" of market research is the knowledge of markets, of statistics, and of the application of market research to business strategy, which is lodged between the researcher's ears and is his or her exclusive and inalienable property. The surgeon needs the operating room of the hospital and all its expensive capital equipment. But the surgeon's true capital investment is twelve or fifteen years of training and the resulting knowledge, which the surgeon takes from one hospital to the next. Without that knowledge the hospital's expensive operating rooms are so much waste and scrap.

This is true whether the knowledge worker commands advanced knowledge, like a surgeon, or simple and fairly elementary knowledge, like a junior accountant. In either case it is the knowledge investment that determines whether the employee is productive or not, more than the tools, machines, and capital furnished by an organization. The industrial worker needed the capitalist infinitely more than the capitalist needed the industrial worker—the basis for Marx's assertion that there would always be a surplus of industrial workers, an "industrial reserve army," that would make sure that wages could not possibly rise above the subsistence level (probably Marx's most egregious error). In the knowledge society the most probable assumption for organizations—and certainly the assumption on which they have to conduct their affairs—is that they need knowledge workers far more than knowledge workers need them.

There was endless debate in the Middle Ages about the hierarchy of knowledges, with philosophy claiming to be the "queen." We long ago gave up that fruitless argument. There is no higher or lower knowledge. When the patient's complaint is an ingrown toenail, the podiatrist's knowledge, not that of the brain surgeon, controls—even though the brain surgeon has received many more years of training and commands a much larger fee. And if an executive is posted to a foreign country, the knowledge he or she needs, and in a hurry, is fluency in a foreign language—something every native of that country has mastered by age three, without any great investment. The knowledge of the knowledge society, precisely because it is knowledge only when applied in action, derives its rank and standing from the situation. In other words, what is knowledge in one situation, such as fluency in Korean for the American executive posted to Seoul, is only information, and not very relevant information at that, when the same executive a few years later has to think through his company's market strategy for Korea. This, too, is new. Knowledges were always seen as fixed stars, so to speak, each occupying its own position in the universe of knowledge. In the knowledge society knowledges are tools, and as such are dependent for their importance and position on the task to be performed.

Management in the Knowledge Society

One additional conclusion: Because the knowledge society perforce has to be a society of organizations, its central and distinctive organ is management.

When our society began to talk of management, the term meant "business management"—because large-scale business was the first of the new organizations to become visible. But we have learned in this past half century that management is the distinctive organ of all organizations. All of them require management, whether they use the term or not. All managers do the same things, whatever the purpose of their organization. All of them have to bring people—each possessing different knowledge—together for joint performance. All of them have to make human strengths productive in performance and human weaknesses irrelevant. All of them have to think through what results are wanted in the organization—and have then to define objectives. All of them are responsible for thinking through what I call the theory of the business—that is, the assumptions on which the organization bases its performance and actions, and the assumptions that the organization has made in deciding what not to do. All of them must think through strategies—that is, the means through which the goals of the organization become performance. All of them have to define the values of the organization, its system of rewards and punishments, its spirit and its culture. In all organizations managers need both the knowledge of management as work and discipline and the knowledge and understanding of the organization itself—its purposes, its values, its environment and markets, its core competencies.

Management as a practice is very old. The most successful executive in all history was surely that Egyptian who, 4,500 years or more ago, first conceived the pyramid, without any precedent, designed it, and built it, and did so in an astonishingly short time. That first pyramid still stands. But as a discipline management is barely fifty years old. It was first dimly perceived around the time of the First World War. It did not emerge until the Second World War, and then did so primarily in the United States. Since then it has been the fastest-growing new function, and the study of it the fastest-growing new discipline. No function in history has emerged as quickly as has management in the past fifty or sixty years, and surely none has had such worldwide sweep in such a short period.

Management is still taught in most business schools as a bundle of techniques, such as budgeting and personnel relations. To be sure, management, like any other work, has its own tools and its own techniques. But just as the essence of medicine is not urinalysis (important though that is), the essence of management is not techniques and procedures. The essence of management is to make knowledges productive. Management, in other words, is a social function. And in its practice management is truly a liberal art.

The Social Sector

The old communities—family, village, parish, and so on—have all but disappeared in the knowledge society. Their place has largely been taken by the new unit of social integration, the organization. Where community was fate, organization is voluntary membership. Where community claimed the entire person, organization is a means to a person's ends, a tool. For 200 years a hot debate has been raging, especially in the West: are communities "organic" or are they simply extensions of the people of which they are made? Nobody would claim that the new organization is "organic." It is clearly an artifact, a creation of man, a social technology.

But who, then, does the community tasks? Two hundred years ago whatever social tasks were being done were done in all societies by a local community. Very few if any of these tasks are being done by the old communities anymore. Nor would they be capable of doing them, considering that they no longer have control of their members or even a firm hold over them. People no longer stay where they were born, either in terms of geography or in terms of social position and status. By definition, a knowledge society is a society of mobility. And all the social functions of the old communities, whether performed well or poorly (and most were performed very poorly indeed), presupposed that the individual and the family would stay put. But the essence of a knowledge society is mobility in terms of where one lives, mobility in terms of what one does, mobility in terms of one's affiliations. People no longer have roots. People no longer have a neighborhood that controls what their home is like, what they do, and, indeed, what their problems are allowed to be. The knowledge society is a society in which many more people than ever before can be successful. But it is therefore, by definition, also a society in which many more people than ever before can fail, or at least come in second. And if only because the application of knowledge to work has made developed societies so much richer than any earlier society could even dream of becoming, the failures, whether poor people or alcoholics, battered women or juvenile delinquents, are seen as failures of society.

Who, then, takes care of the social tasks in the knowledge society? We cannot ignore them. But the traditional community is incapable of tackling them.

Two answers have emerged in the past century or so—a majority answer and a dissenting opinion. Both have proved to be wrong.

The majority answer goes back more than a hundred years, to the 1880s, when Bismarck's Germany took the first faltering steps toward the welfare state. The answer: the problems of the social sector can, should, and must be solved by government. This is still probably the answer that most people accept, especially in the developed countries of the West—even though most people probably no longer fully believe it. But it has been totally disproved. Modern government, especially since the Second World War, has everywhere become a huge welfare bureaucracy. And the bulk of the budget in every developed country today is devoted to entitlements—to payments for all kinds of social services. Yet in every developed country society is becoming sicker rather than healthier, and social problems are multiplying. Government has a big role to play in social tasks—the role of policymaker, of standard setter, and, to a substantial extent, of paymaster. But as the agency to run social services, it has proved almost totally incompetent.

In my 1942 book The Future of Industrial Man, I formulated a dissenting opinion. I argued then that the new organization—and fifty years ago that meant the large business enterprise—would have to be the community in which the individual would find status and function, with the workplace community becoming the one in and through which social tasks would be organized. In Japan (though quite independently and without any debt to me) the large employer—government agency or business—has indeed increasingly attempted to serve as a community for its employees. Lifetime employment is only one affirmation of this. Company housing, company health plans, company vacations, and so on all emphasize for the Japanese employee that the employer, and especially the big corporation, is the community and the successor to yesterday's village—even to yesterday's family. This, however, has not worked either.

There is need, especially in the West, to bring the employee increasingly into the government of the workplace community. What is now called empowerment is very similar to the things I talked about fifty years ago. But it does not create a community. Nor does it create the structure through which the social tasks of the knowledge society can be tackled. In fact, practically all these tasks—whether education or health care; the anomies and diseases of a developed and, especially, a rich society, such as alcohol and drug abuse; or the problems of incompetence and irresponsibility such as those of the underclass in the American city—lie outside the employing institution.

The right answer to the question Who takes care of the social challenges of the knowledge society? is neither the government nor the employing organization. The answer is a separate and new social sector.

It is less than fifty years, I believe, since we first talked in the United States of the two sectors of a modern society—the "public sector" (government) and the "private sector" (business). In the past twenty years the United States has begun to talk of a third sector, the "nonprofit sector"—those organizations that increasingly take care of the social challenges of a modern society.

In the United States, with its tradition of independent and competitive churches, such a sector has always existed. Even now churches are the largest single part of the social sector in the United States, receiving almost half the money given to charitable institutions, and about a third of the time volunteered by individuals. But the nonchurch part of the social sector has been the growth sector in the United States. In the early 1990s about a million organizations were registered in the United States as nonprofit or charitable organizations doing social-sector work. The overwhelming majority of these, some 70 percent, have come into existence in the past thirty years. And most are community services concerned with life on this earth rather than with the Kingdom of Heaven. Quite a few of the new organizations are, of course, religious in their orientation, but for the most part these are not churches. They are "parachurches" engaged in a specific social task, such as the rehabilitation of alcohol and drug addicts, the rehabilitation of criminals, or elementary school education. Even within the church segment of the social sector the organizations that have shown the capacity to grow are radically new. They are the "pastoral" churches, which focus on the spiritual needs of individuals, especially educated knowledge workers, and then put the spiritual energies of their members to work on the social challenges and social problems of the community—especially, of course, the urban community.

We still talk of these organizations as "nonprofits." But this is a legal term. It means nothing except that under American law these organizations do not pay taxes. Whether they are organized as nonprofit or not is actually irrelevant to their function and behavior. Many American hospitals since 1960 or 1970 have become "for-profits" and are organized in what legally are business corporations. They function in exactly the same way as traditional "nonprofit" hospitals. What matters is not the legal basis but that the social-sector institutions have a particular kind of purpose. Government demands compliance; it makes rules and enforces them. Business expects to be paid; it supplies. Social-sector institutions aim at changing the human being. The "product" of a school is the student who has learned something. The "product" of a hospital is a cured patient. The "product" of a church is a churchgoer whose life is being changed. The task of social-sector organizations is to create human health and well-being.

Increasingly these organizations of the social sector serve a second and equally important purpose. They create citizenship. Modern society and modern polity have become so big and complex that citizenship—that is, responsible participation—is no longer possible. All we can do as citizens is to vote once every few years and to pay taxes all the time.

As a volunteer in a social-sector institution, the individual can again make a difference. In the United States, where there is a long volunteer tradition because of the old independence of the churches, almost every other adult in the 1990s is working at least three—and often five—hours a week as a volunteer in a social-sector organization. Britain is the only other country with something like this tradition, although it exists there to a much lesser extent (in part because the British welfare state is far more embracing, but in much larger part because it has an established church—paid for by the state and run as a civil service). Outside the English-speaking countries there is not much of a volunteer tradition. In fact, the modern state in Europe and Japan has been openly hostile to anything that smacks of volunteerism—most so in France and Japan. It is ancien régime and suspected of being fundamentally subversive.

But even in these countries things are changing, because the knowledge society needs the social sector, and the social sector needs the volunteer. But knowledge workers also need a sphere in which they can act as citizens and create a community. The workplace does not give it to them. Nothing has been disproved faster than the concept of the "organization man," which was widely accepted forty years ago. In fact, the more satisfying one's knowledge work is, the more one needs a separate sphere of community activity.

Many social-sector organizations will become partners with government—as is the case in a great many "privatizations," where, for instance, a city pays for street cleaning and an outside contractor does the work. In American education over the next twenty years there will be more and more government-paid vouchers that will enable parents to put their children into a variety of different schools, some public and tax supported, some private and largely dependent on the income from the vouchers. These social-sector organizations, although partners with government, also clearly compete with government. The relationship between the two has yet to be worked out—and there is practically no precedent for it.

What constitutes performance for social-sector organizations, and especially for those that, being nonprofit and charitable, do not have the discipline of a financial bottom line, has also yet to be worked out. We know that social-sector organizations need management. But what precisely management means for the social-sector organization is just beginning to be studied. With respect to the management of the nonprofit organization we are in many ways pretty much where we were fifty or sixty years ago with respect to the management of the business enterprise: the work is only beginning.

But one thing is already clear. The knowledge society has to be a society of three sectors: a public sector of government, a private sector of business, and a social sector. And I submit that it is becoming increasingly clear that through the social sector a modern developed society can again create responsible and achieving citizenship, and can again give individuals—especially knowledge workers—a sphere in which they can make a difference in society and re-create community.

The School as Society's Center

Knowledge has become the key resource, for a nation's military strength as well as for its economic strength. And this knowledge can be acquired only through schooling. It is not tied to any country. It is portable. It can be created everywhere, fast and cheaply. Finally, it is by definition changing. Knowledge as the key resource is fundamentally different from the traditional key resources of the economist—land, labor, and even capital.

That knowledge has become the key resource means that there is a world economy, and that the world economy, rather than the national economy, is in control. Every country, every industry, and every business will be in an increasingly competitive environment. Every country, every industry, and every business will, in its decisions, have to consider its competitive standing in the world economy and the competitiveness of its knowledge competencies.

Politics and policies still center on domestic issues in every country. Few if any politicians, journalists, or civil servants look beyond the boundaries of their own country when a new measure such as taxes, the regulation of business, or social spending is being discussed. Even in Germany—Europe's most export-conscious and export-dependent major country—this is true. Almost no one in the West asked in 1990 what the government's unbridled spending in the East would do to Germany's competitiveness.

This will no longer do. Every country and every industry will have to learn that the first question is not Is this measure desirable? but What will be the impact on the country's, or the industry's, competitive position in the world economy? We need to develop in politics something similar to the environmental-impact statement, which in the United States is now required for any government action affecting the quality of the environment: we need a competitive-impact statement. The impact on one's competitive position in the world economy should not necessarily be the main factor in a decision. But to make a decision without considering it has become irresponsible.

Altogether, the fact that knowledge has become the key resource means that the standing of a country in the world economy will increasingly determine its domestic prosperity. Since 1950 a country's ability to improve its position in the world economy has been the main and perhaps the sole determinant of performance in the domestic economy. Monetary and fiscal policies have been practically irrelevant, for better and, very largely, even for worse (with the single exception of governmental policies creating inflation, which very rapidly undermines both a country's competitive standing in the world economy and its domestic stability and ability to grow).

The primacy of foreign affairs is an old political precept going back in European politics to the seventeenth century. Since the Second World War it has also been accepted in American politics—though only grudgingly so, and only in emergencies. It has always meant that military security was to be given priority over domestic policies, and in all likelihood this is what it will continue to mean, Cold War or no Cold War. But the primacy of foreign affairs is now acquiring a different dimension. This is that a country's competitive position in the world economy—and also an industry's and an organization's—has to be the first consideration in its domestic policies and strategies. This holds true for a country that is only marginally involved in the world economy (should there still be such a one), and for a business that is only marginally involved in the world economy, and for a university that sees itself as totally domestic. Knowledge knows no boundaries. There is no domestic knowledge and no international knowledge. There is only knowledge. And with knowledge becoming the key resource, there is only a world economy, even though the individual organization in its daily activities operates within a national, regional, or even local setting.

How Can Government Function?

Social tasks are increasingly being done by individual organizations, each created for one, and only one, social task, whether education, health care, or street cleaning. Society, therefore, is rapidly becoming pluralist. Yet our social and political theories still assume that there are no power centers except government. To destroy or at least to render impotent all other power centers was, in fact, the thrust of Western history and Western politics for 500 years, from the fourteenth century on. This drive culminated in the eighteenth and nineteenth centuries, when, except in the United States, such early institutions as still survived—for example, the universities and the churches—became organs of the state, with their functionaries becoming civil servants. But then, beginning in the mid-nineteenth century, new centers arose—the first one, the modern business enterprise, around 1870. And since then one new organization after another has come into being.

The new institutions—the labor union, the modern hospital, the megachurch, the research university—of the society of organizations have no interest in public power. They do not want to be governments. But they demand—and, indeed, need—autonomy with respect to their functions. Even at the extreme of Stalinism the managers of major industrial enterprises were largely masters within their enterprises, and the individual industry was largely autonomous. So were the university, the research lab, and the military.

In the "pluralism" of yesterday—in societies in which control was shared by various institutions, such as feudal Europe in the Middle Ages and Edo Japan in the seventeenth and eighteenth centuries—pluralist organizations tried to be in control of whatever went on in their community. At least, they tried to prevent any other organization from having control of any community concern or community institution within their domain. But in the society of organizations each of the new institutions is concerned only with its own purpose and mission. It does not claim power over anything else. But it also does not assume responsibility for anything else. Who, then, is concerned with the common good?

This has always been a central problem of pluralism. No earlier pluralism solved it. The problem remains, but in a new guise. So far it has been seen as imposing limits on social institutions—forbidding them to do things in the pursuit of their mission, function, and interest which encroach upon the public domain or violate public policy. The laws against discrimination—by race, sex, age, educational level, health status, and so on—which have proliferated in the United States in the past forty years all forbid socially undesirable behavior. But we are increasingly raising the question of the social responsibility of social institutions: What do institutions have to do—in addition to discharging their own functions—to advance the public good? This, however, though nobody seems to realize it, is a demand to return to the old pluralism, the pluralism of feudalism. It is a demand that private hands assume public power.

This could seriously threaten the functioning of the new organizations, as the example of the schools in the United States makes abundantly clear. One of the major reasons for the steady decline in the capacity of the schools to do their job—that is, to teach children elementary knowledge skills—is surely that since the 1950s the United States has increasingly made the schools the carriers of all kinds of social policies: the elimination of racial discrimination, of discrimination against all other kinds of minorities, including the handicapped, and so on. Whether we have actually made any progress in assuaging social ills is highly debatable; so far the schools have not proved particularly effective as tools for social reform. But making the school the organ of social policies has, without any doubt, severely impaired its capacity to do its own job.

The new pluralism has a new problem: how to maintain the performance capacity of the new institutions and yet maintain the cohesion of society. This makes doubly important the emergence of a strong and functioning social sector. It is an additional reason why the social sector will increasingly be crucial to the performance, if not to the cohesion, of the knowledge society.

Of the new organizations under consideration here, the first to arise, 120 years ago, was the business enterprise. It was only natural, therefore, that the problem of the emerging society of organizations was first seen as the relationship of government and business. It was also natural that the new interests were first seen as economic interests.

The first attempt to come to grips with the politics of the emerging society of organizations aimed, therefore, at making economic interests serve the political process. The first to pursue this goal was an American, Mark Hanna, the restorer of the Republican Party in the 1890s and, in many ways, the founding father of twentieth-century American politics. His definition of politics as a dynamic disequilibrium between the major economic interests—farmers, business, and labor—remained the foundation of American politics until the Second World War. In fact, Franklin D. Roosevelt restored the Democratic Party by reformulating Hanna. And the basic political position of this philosophy is evident in the title of the most influential political book written during the New Deal years—Politics: Who Gets What, When, How (1936), by Harold D. Lasswell.

Mark Hanna in 1896 knew very well that there are plenty of concerns other than economic concerns. And yet it was obvious to him—as it was to Roosevelt forty years later—that economic interests had to be used to integrate all the others. This is still the assumption underlying most analyses of American politics—and, in fact, of politics in all developed countries. But the assumption is no longer tenable. Underlying Hanna's formula of economic interests is the view of land, labor, and capital as the existing resources. But knowledge, the new resource for economic performance, is not in itself economic.

It cannot be bought or sold. The fruits of knowledge, such as the income from a patent, can be bought or sold; the knowledge that went into the patent cannot be conveyed at any price. No matter how much a suffering person is willing to pay a neurosurgeon, the neurosurgeon cannot sell to him—and surely cannot convey to him—the knowledge that is the foundation of the neurosurgeon's performance and income. The acquisition of knowledge has a cost, as has the acquisition of anything. But the acquisition of knowledge has no price.

Economic interests can therefore no longer integrate all other concerns and interests. As soon as knowledge became the key economic resource, the integration of interests—and with it the integration of the pluralism of a modern polity—began to be lost. Increasingly, non-economic interests are becoming the new pluralism—the special interests, the single-cause organizations, and so on. Increasingly, politics is not about "who gets what, when, how" but about values, each of them considered to be an absolute. Politics is about the right to life of the embryo in the womb as against the right of a woman to control her own body and to abort an embryo. It is about the environment. It is about gaining equality for groups alleged to be oppressed and discriminated against. None of these issues is economic. All are fundamentally moral.

Economic interests can be compromised, which is the great strength of basing politics on economic interests. "Half a loaf is still bread" is a meaningful saying. But half a baby, in the biblical story of the judgment of Solomon, is not half a child. No compromise is possible. To an environmentalist, half an endangered species is an extinct species.

This greatly aggravates the crisis of modern government. Newspapers and commentators still tend to report in economic terms what goes on in Washington, in London, in Bonn, or in Tokyo. But more and more of the lobbyists who determine governmental laws and governmental actions are no longer lobbyists for economic interests. They lobby for and against measures that they—and their paymasters—see as moral, spiritual, cultural. And each of these new moral concerns, each represented by a new organization, claims to stand for an absolute. Dividing their loaf is not compromise; it is treason.

There is thus in the society of organizations no one integrating force that pulls individual organizations in society and community into coalition. The traditional parties—perhaps the most successful political creations of the nineteenth century—can no longer integrate divergent groups and divergent points of view into a common pursuit of power. Rather, they have become battlefields between groups, each of them fighting for absolute victory and not content with anything but total surrender of the enemy.

The Need for Social and Political Innovation

The twenty-first century will surely be one of continuing social, economic, and political turmoil and challenge, at least in its early decades. What I have called the age of social transformation is not over yet. And the challenges looming ahead may be more serious and more daunting than those posed by the social transformations that have already come about, the social transformations of the twentieth century.

Yet we will not even have a chance to resolve these new and looming problems of tomorrow unless we first address the challenges posed by the developments that are already accomplished facts, the developments reported in the earlier sections of this essay. These are the priority tasks. For only if they are tackled can we in the developed democratic free-market countries hope to have the social cohesion, the economic strength, and the governmental capacity needed to tackle the new challenges. The first order of business—for sociologists, political scientists, and economists; for educators; for business executives, politicians, and nonprofit-group leaders; for people in all walks of life, as parents, as employees, as citizens—is to work on these priority tasks, for few of which we so far have a precedent, let alone tested solutions.

We will have to think through education—its purpose, its values, its content. We will have to learn to define the quality of education and the productivity of education, to measure both and to manage both.

We need systematic work on the quality of knowledge and the productivity of knowledge—neither even defined so far. The performance capacity, if not the survival, of any organization in the knowledge society will come increasingly to depend on those two factors. But so will the performance capacity, if not the survival, of any individual in the knowledge society. And what responsibility does knowledge have? What are the responsibilities of the knowledge worker, and especially of a person with highly specialized knowledge?

Increasingly, the policy of any country—and especially of any developed country—will have to give primacy to the country's competitive position in an increasingly competitive world economy. Any proposed domestic policy needs to be shaped so as to improve that position, or at least to minimize adverse impacts on it. The same holds true for the policies and strategies of any institution within a nation, whether a local government, a business, a university, or a hospital.

But then we also need to develop an economic theory appropriate to a world economy in which knowledge has become the key economic resource and the dominant, if not the only, source of comparative advantage.

We are beginning to understand the new integrating mechanism: organization. But we still have to think through how to balance two apparently contradictory requirements. Organizations must competently perform the one social function for the sake of which they exist—the school to teach, the hospital to cure the sick, and the business to produce goods, services, or the capital to provide for the risks of the future. They can do so only if they single-mindedly concentrate on their specialized mission. But there is also society's need for these organizations to take social responsibility—to work on the problems and challenges of the community. Together these organizations are the community. The emergence of a strong, independent, capable social sector—neither public sector nor private sector—is thus a central need of the society of organizations. But by itself it is not enough—the organizations of both the public and the private sector must share in the work.

The function of government and its functioning must be central to political thought and political action.

The megastate in which this century indulged has not performed, either in its totalitarian or in its democratic version. It has not delivered on a single one of its promises. And government by countervailing lobbyists is neither particularly effective—in fact, it is paralysis—nor particularly attractive. Yet effective government has never been needed more than in this highly competitive and fast-changing world of ours, in which the dangers created by the pollution of the physical environment are matched only by the dangers of worldwide armaments pollution. And we do not have even the beginnings of political theory or the political institutions needed for effective government in the knowledge-based society of organizations.

If the twentieth century was one of social transformations, the twenty-first century needs to be one of social and political innovations.

What Money Can't Buy: The Moral Limits of Markets

Michael J. Sandel

Should we pay children to read books or to get good grades? Should we allow corporations to pay for the right to pollute the atmosphere? Is it ethical to pay people to test risky new drugs or to donate their organs? What about hiring mercenaries to fight our wars? Auctioning admission to elite universities? Selling citizenship to immigrants willing to pay?

In What Money Can’t Buy, Michael J. Sandel takes on one of the biggest ethical questions of our time: Is there something wrong with a world in which everything is for sale? If so, how can we prevent market values from reaching into spheres of life where they don’t belong? What are the moral limits of markets?

In recent decades, market values have crowded out nonmarket norms in almost every aspect of life—medicine, education, government, law, art, sports, even family life and personal relations. Without quite realizing it, Sandel argues, we have drifted from having a market economy to being a market society. Is this where we want to be?

In his New York Times bestseller Justice, Sandel showed himself to be a master at illuminating, with clarity and verve, the hard moral questions we confront in our everyday lives. Now, in What Money Can’t Buy, he provokes an essential discussion that we, in our market-driven age, need to have: What is the proper role of markets in a democratic society—and how can we protect the moral and civic goods that markets don’t honor and that money can’t buy?

Biography

Michael Sandel is the Anne T. and Robert M. Bass Professor of Government at Harvard University. Sandel's legendary 'Justice' course is one of the most popular and influential at Harvard. In 2007, Harvard made Sandel's course available to alumni around the world through webstreaming and podcasting. Over 5,000 participants signed up, and Harvard Clubs from Mexico to Australia organized local discussion groups in connection with the course. In May 2007, Sandel delivered a series of lectures at major universities in China, and he has been a visiting professor at the Sorbonne in Paris. He is a member of the American Academy of Arts and Sciences and the Council on Foreign Relations. Sandel is the author of many books and has previously written for the Atlantic Monthly, the New Republic, and the New York Times. He was the 2009 BBC Reith Lecturer.

Egos and Immorality

By PAUL KRUGMAN

In the wake of a devastating financial crisis, President Obama has enacted some modest and obviously needed regulation; he has proposed closing a few outrageous tax loopholes; and he has suggested that Mitt Romney’s history of buying and selling companies, often firing workers and gutting their pensions along the way, doesn’t make him the right man to run America’s economy.

Wall Street has responded — predictably, I suppose — by whining and throwing temper tantrums. And it has, in a way, been funny to see how childish and thin-skinned the Masters of the Universe turn out to be. Remember when Stephen Schwarzman of the Blackstone Group compared a proposal to limit his tax breaks to Hitler’s invasion of Poland? Remember when Jamie Dimon of JPMorgan Chase characterized any discussion of income inequality as an attack on the very notion of success?

But here’s the thing:

If Wall Streeters are spoiled brats, they are spoiled brats with immense power and wealth at their disposal. And what they’re trying to do with that power and wealth right now is buy themselves not just policies that serve their interests, but immunity from criticism.

Actually, before I get to that, let me take a moment to debunk a fairy tale that we’ve been hearing a lot from Wall Street and its reliable defenders — a tale in which the incredible damage runaway finance inflicted on the U.S. economy gets flushed down the memory hole, and financiers instead become the heroes who saved America.

Once upon a time, this fairy tale tells us, America was a land of lazy managers and slacker workers.

Productivity languished, and American industry was fading away in the face of foreign competition.

Then square-jawed, tough-minded buyout kings like Mitt Romney and the fictional Gordon Gekko came to the rescue, imposing financial and work discipline. Sure, some people didn’t like it, and, sure, they made a lot of money for themselves along the way. But the result was a great economic revival, whose benefits trickled down to everyone.

You can see why Wall Street likes this story. But none of it — except the bit about the Gekkos and the Romneys making lots of money — is true.

For the alleged productivity surge never actually happened. In fact, overall business productivity in America grew faster in the postwar generation, an era in which banks were tightly regulated and private equity barely existed, than it has since our political system decided that greed was good.

What about international competition?

We now think of America as a nation doomed to perpetual trade deficits, but it was not always thus. From the 1950s through the 1970s, we generally had more or less balanced trade, exporting about as much as we imported. The big trade deficits only started in the Reagan years, that is, during the era of runaway finance.

And what about that trickle-down? It never took place. There have been significant productivity gains these past three decades, although not on the scale that Wall Street’s self-serving legend would have you believe. However, only a small part of those gains got passed on to American workers.

So, no, financial wheeling and dealing did not do wonders for the American economy, and there are real questions about why, exactly, the wheeler-dealers have made so much money while generating such dubious results.

Those are, however, questions that the wheeler-dealers don’t want asked — and not, I think, just because they want to defend their tax breaks and other privileges. It’s also an ego thing. Vast wealth isn’t enough; they want deference, too, and they’re doing their best to buy it. It has been amazing to read about erstwhile Democrats on Wall Street going all in for Mitt Romney, not because they believe that he has good policy ideas, but because they’re taking President Obama’s very mild criticism of financial excesses as a personal insult.

And it has been especially sad to see some Democratic politicians with ties to Wall Street, like Newark’s mayor, Cory Booker, dutifully rise to the defense of their friends’ surprisingly fragile egos.

As I said at the beginning, in a way Wall Street’s self-centered, self-absorbed behavior has been kind of funny. But while this behavior may be funny, it is also deeply immoral.

Think about where we are right now, in the fifth year of a slump brought on by irresponsible bankers. The bankers themselves have been bailed out, but the rest of the nation continues to suffer terribly, with long-term unemployment still at levels not seen since the Great Depression, with a whole cohort of young Americans graduating into an abysmal job market.

And in the midst of this national nightmare, all too many members of the economic elite seem mainly concerned with the way the president apparently hurt their feelings.

That isn’t funny.

It’s shameful.

A version of this op-ed appeared in print on May 25, 2012, on page A31 of the New York edition with the headline: Egos And Immorality.

Beyond the Information Revolution

by Peter F. Drucker

THE truly revolutionary impact of the Information Revolution is just beginning to be felt. But it is not "information" that fuels this impact. It is not "artificial intelligence." It is not the effect of computers and data processing on decision-making, policymaking, or strategy. It is something that practically no one foresaw or, indeed, even talked about ten or fifteen years ago: e-commerce -- that is, the explosive emergence of the Internet as a major, perhaps eventually the major, worldwide distribution channel for goods, for services, and, surprisingly, for managerial and professional jobs. This is profoundly changing economies, markets, and industry structures; products and services and their flow; consumer segmentation, consumer values, and consumer behavior; jobs and labor markets. But the impact may be even greater on societies and politics and, above all, on the way we see the world and ourselves in it.

At the same time, new and unexpected industries will no doubt emerge, and fast. One is already here: biotechnology. And another: fish farming. Within the next fifty years fish farming may change us from hunters and gatherers on the seas into "marine pastoralists" -- just as a similar innovation some 10,000 years ago changed our ancestors from hunters and gatherers on the land into agriculturists and pastoralists.

It is likely that other new technologies will appear suddenly, leading to major new industries. What they may be is impossible even to guess at. But it is highly probable -- indeed, nearly certain -- that they will emerge, and fairly soon. And it is nearly certain that few of them -- and few industries based on them -- will come out of computer and information technology. Like biotechnology and fish farming, each will emerge from its own unique and unexpected technology.

Of course, these are only predictions. But they are made on the assumption that the Information Revolution will evolve as several earlier technology-based "revolutions" have evolved over the past 500 years, since Gutenberg's printing revolution, around 1455. In particular the assumption is that the Information Revolution will be like the Industrial Revolution of the late eighteenth and early nineteenth centuries. And that is indeed exactly how the Information Revolution has been during its first fifty years.

The Railroad

THE Information Revolution is now at the point at which the Industrial Revolution was in the early 1820s, about forty years after James Watt's improved steam engine (first installed in 1776) was first applied, in 1785, to an industrial operation -- the spinning of cotton. And the steam engine was to the first Industrial Revolution what the computer has been to the Information Revolution -- its trigger, but above all its symbol. Almost everybody today believes that nothing in economic history has ever moved as fast as, or had a greater impact than, the Information Revolution. But the Industrial Revolution moved at least as fast in the same time span, and had probably an equal impact if not a greater one. In short order it mechanized the great majority of manufacturing processes, beginning with the production of the most important industrial commodity of the eighteenth and early nineteenth centuries: textiles. Moore's Law asserts that the price of the Information Revolution's basic element, the microchip, drops by 50 percent every eighteen months. The same was true of the products whose manufacture was mechanized by the first Industrial Revolution. The price of cotton textiles fell by 90 percent in the fifty years spanning the start of the nineteenth century. The production of cotton textiles increased at least 150-fold in Britain alone in the same period. And although textiles were the most visible product of its early years, the Industrial Revolution mechanized the production of practically all other major goods, such as paper, glass, leather, and bricks. Its impact was by no means confined to consumer goods. The production of iron and ironware -- for example, wire -- became mechanized and steam-driven as fast as did that of textiles, with the same effects on cost, price, and output. By the end of the Napoleonic Wars the making of guns was steam-driven throughout Europe; cannons were made ten to twenty times as fast as before, and their cost dropped by more than two thirds. By that time Eli Whitney had similarly mechanized the manufacture of muskets in America and had created the first mass-production industry.
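
The price arithmetic behind this comparison is simple compound decline. As a rough sketch (the inputs are the text's approximate figures, not precise historical data), each quoted decline can be converted into an implied annual rate:

```python
# Compound price-decline arithmetic for the figures quoted above.
# These are the text's rough numbers, not precise historical data.

def annual_decline(price_ratio: float, years: float) -> float:
    """Implied constant annual rate of decline when prices end at
    price_ratio times their starting level after `years` years."""
    return 1 - price_ratio ** (1 / years)

# Moore's Law: the chip price halves every eighteen months.
chip = annual_decline(0.5, 1.5)        # about 37% per year

# Cotton textiles: a 90% fall over fifty years leaves 10% of the price.
textiles = annual_decline(0.10, 50)    # about 4.5% per year

print(f"Microchips: ~{chip:.0%} per year")
print(f"Cotton textiles: ~{textiles:.1%} per year")
```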

These forty or fifty years gave rise to the factory and the "working class." Both were still so few in number in the mid-1820s, even in England, as to be statistically insignificant. But psychologically they had come to dominate (and soon would politically also). Before there were factories in America, Alexander Hamilton foresaw an industrialized country in his 1791 Report on Manufactures. A decade later, in 1803, a French economist, Jean-Baptiste Say, saw that the Industrial Revolution had changed economics by creating the "entrepreneur."

The social consequences went far beyond factory and working class. As the historian Paul Johnson has pointed out, in A History of the American People (1997), it was the explosive growth of the steam-engine-based textile industry that revived slavery. Considered to be practically dead by the Founders of the American Republic, slavery roared back to life as the cotton gin -- soon steam-driven -- created a huge demand for low-cost labor and made breeding slaves America's most profitable industry for some decades.

The Industrial Revolution also had a great impact on the family. The nuclear family had long been the unit of production. On the farm and in the artisan's workshop husband, wife, and children worked together. The factory, almost for the first time in history, took worker and work out of the home and moved them into the workplace, leaving family members behind -- whether spouses of adult factory workers or, especially in the early stages, parents of child factory workers.

Indeed, the "crisis of the family" did not begin after the Second World War. It began with the Industrial Revolution -- and was in fact a stock concern of those who opposed the Industrial Revolution and the factory system. (The best description of the divorce of work and family, and of its effect on both, is probably Charles Dickens's 1854 novel Hard Times.)

But despite all these effects, the Industrial Revolution in its first half century only mechanized the production of goods that had been in existence all along. It tremendously increased output and tremendously decreased cost. It created both consumers and consumer products. But the products themselves had been around all along. And products made in the new factories differed from traditional products only in that they were uniform, with fewer defects than existed in products made by any but the top craftsmen of earlier periods.

There was only one important exception, one new product, in those first fifty years: the steamboat, first made practical by Robert Fulton in 1807. It had little impact until thirty or forty years later. In fact, until almost the end of the nineteenth century more freight was carried on the world's oceans by sailing vessels than by steamships.

Then, in 1829, came the railroad, a product truly without precedent, and it forever changed economy, society, and politics.

In retrospect it is difficult to imagine why the invention of the railroad took so long. Rails to move carts had been around in coal mines for a very long time. What could be more obvious than to put a steam engine on a cart to drive it, rather than have it pushed by people or pulled by horses? But the railroad did not emerge from the cart in the mines. It was developed quite independently. And it was not intended to carry freight. On the contrary, for a long time it was seen only as a way to carry people. Railroads became freight carriers thirty years later, in America. (In fact, as late as the 1870s and 1880s the British engineers who were hired to build the railroads of newly Westernized Japan designed them to carry passengers -- and to this day Japanese railroads are not equipped to carry freight.) But until the first railroad actually began to operate, it was virtually unanticipated.

Within five years, however, the Western world was engulfed by the biggest boom history had ever seen -- the railroad boom. Punctuated by the most spectacular busts in economic history, the boom continued in Europe for thirty years, until the late 1850s, by which time most of today's major railroads had been built. In the United States it continued for another thirty years, and in outlying areas -- Argentina, Brazil, Asian Russia, China -- until the First World War. The railroad was the truly revolutionary element of the Industrial Revolution, for not only did it create a new economic dimension but also it rapidly changed what I would call the mental geography. For the first time in history human beings had true mobility. For the first time the horizons of ordinary people expanded. Contemporaries immediately realized that a fundamental change in mentality had occurred. (A good account of this can be found in what is surely the best portrayal of the Industrial Revolution's society in transition, George Eliot's 1871 novel Middlemarch.) As the great French historian Fernand Braudel pointed out in his last major work, The Identity of France (1986), it was the railroad that made France into one nation and one culture. It had previously been a congeries of self-contained regions, held together only politically. And the role of the railroad in creating the American West is, of course, a commonplace in U.S. history.

Routinization

LIKE the Industrial Revolution two centuries ago, the Information Revolution so far -- that is, since the first computers, in the mid-1940s -- has only transformed processes that were here all along. In fact, the real impact of the Information Revolution has not been in the form of "information" at all. Almost none of the effects of information envisaged forty years ago have actually happened. For instance, there has been practically no change in the way major decisions are made in business or government. But the Information Revolution has routinized traditional processes in an untold number of areas.

The software for tuning a piano converts a process that traditionally took three hours into one that takes twenty minutes. There is software for payrolls, for inventory control, for delivery schedules, and for all the other routine processes of a business. Drawing the inside arrangements of a major building (heating, water supply, sewerage, and so on) such as a prison or a hospital formerly took, say, twenty-five highly skilled draftsmen up to fifty days; now there is a program that enables one draftsman to do the job in a couple of days, at a tiny fraction of the cost. There is software to help people do their tax returns and software that teaches hospital residents how to take out a gall bladder. The people who now speculate in the stock market online do exactly what their predecessors in the 1920s did while spending hours each day in a brokerage office. The processes have not been changed at all. They have been routinized, step by step, with a tremendous saving in time and, often, in cost. The psychological impact of the Information Revolution, like that of the Industrial Revolution, has been enormous. It has perhaps been greatest on the way in which young children learn. Beginning at age four (and often earlier), children now rapidly develop computer skills, soon surpassing their elders; computers are their toys and their learning tools. Fifty years hence we may well conclude that there was no "crisis of American education" in the closing years of the twentieth century -- there was only a growing incongruence between the way twentieth-century schools taught and the way late-twentieth-century children learned. Something similar happened in the sixteenth-century university, a hundred years after the invention of the printing press and movable type.
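
The routinization savings cited above reduce to simple ratios. As a minimal sketch, using only the illustrative figures quoted in the paragraph above (which are themselves "say" estimates):

```python
# Speedup ratios implied by the routinization examples above.
# The inputs are the text's own illustrative figures.

def speedup(before: float, after: float) -> float:
    """How many times less time or labor the routinized process takes."""
    return before / after

# Piano tuning: three hours reduced to twenty minutes.
piano = speedup(3 * 60, 20)            # 9x

# Building drawings: 25 draftsmen for 50 days versus
# 1 draftsman for 2 days, measured in person-days of skilled labor.
drafting = speedup(25 * 50, 1 * 2)     # 625x

print(f"Piano tuning: {piano:.0f}x faster")
print(f"Drafting: {drafting:.0f}x less labor")
```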

But as to the way we work, the Information Revolution has so far simply routinized what was done all along. The only exception is the CD-ROM, invented around twenty years ago to present operas, university courses, and a writer's oeuvre in an entirely new way. Like the steamboat, the CD-ROM did not immediately catch on.

The Meaning of E-commerce

E-COMMERCE is to the Information Revolution what the railroad was to the Industrial Revolution -- a totally new, totally unprecedented, totally unexpected development. And like the railroad 170 years ago, e-commerce is creating a new and distinct boom, rapidly changing the economy, society, and politics. One example: A mid-sized company in America's industrial Midwest, founded in the 1920s and now run by the grandchildren of the founder, used to have some 60 percent of the market in inexpensive dinnerware for fast-food eateries, school and office cafeterias, and hospitals within a hundred-mile radius of its factory. China is heavy and breaks easily, so cheap china is traditionally sold within a small area. Almost overnight this company lost more than half of its market. One of its customers, a hospital cafeteria where someone went "surfing" on the Internet, discovered a European manufacturer that offered china of apparently better quality at a lower price and shipped cheaply by air. Within a few months the main customers in the area shifted to the European supplier. Few of them, it seems, realize -- let alone care -- that the stuff comes from Europe.

In the new mental geography created by the railroad, humanity mastered distance. In the mental geography of e-commerce, distance has been eliminated. There is only one economy and only one market.

One consequence of this is that every business must become globally competitive, even if it manufactures or sells only within a local or regional market. The competition is not local anymore -- in fact, it knows no boundaries. Every company has to become transnational in the way it is run. Yet the traditional multinational may well become obsolete. It manufactures and distributes in a number of distinct geographies, in which it is a local company. But in e-commerce there are neither local companies nor distinct geographies. Where to manufacture, where to sell, and how to sell will remain important business decisions. But in another twenty years they may no longer determine what a company does, how it does it, and where it does it.

At the same time, it is not yet clear what kinds of goods and services will be bought and sold through e-commerce and what kinds will turn out to be unsuitable for it. This has been true whenever a new distribution channel has arisen. Why, for instance, did the railroad change both the mental and the economic geography of the West, whereas the steamboat -- with its equal impact on world trade and passenger traffic -- did neither? Why was there no "steamboat boom"?

Equally unclear has been the impact of more-recent changes in distribution channels -- in the shift, for instance, from the local grocery store to the supermarket, from the individual supermarket to the supermarket chain, and from the supermarket chain to Wal-Mart and other discount chains. It is already clear that the shift to e-commerce will be just as eclectic and unexpected.

Here are a few examples. Twenty-five years ago it was generally believed that within a few decades the printed word would be dispatched electronically to individual subscribers' computer screens. Subscribers would then either read text on their computer screens or download it and print it out. This was the assumption that underlay the CD-ROM. Thus any number of newspapers and magazines, by no means only in the United States, established themselves online; few, so far, have become gold mines. But anyone who twenty years ago predicted the business of Amazon.com and barnesandnoble.com -- that is, that books would be sold on the Internet but delivered in their heavy, printed form -- would have been laughed off the podium. Yet Amazon.com and barnesandnoble.com are in exactly that business, and they are in it worldwide. The first order for the U.S. edition of my most recent book, Management Challenges for the 21st Century (1999), came to Amazon.com, and it came from Argentina. Another example: Ten years ago one of the world's leading automobile companies made a thorough study of the expected impact on automobile sales of the then emerging Internet. It concluded that the Internet would become a major distribution channel for used cars, but that customers would still want to see new cars, to touch them, to test-drive them. In actuality, at least so far, most used cars are still being bought not over the Internet but in a dealer's lot. However, as many as half of all new cars sold (excluding luxury cars) may now actually be "bought" over the Internet. Dealers only deliver cars that customers have chosen well before they enter the dealership. What does this mean for the future of the local automobile dealership, the twentieth century's most profitable small business?

Another example: Traders in the American stock-market boom of 1998 and 1999 increasingly buy and sell online. But investors seem to be shifting away from buying electronically. The major U.S. investment vehicle is mutual funds. And whereas almost half of all mutual funds a few years ago were bought electronically, it is estimated that the figure will drop to 35 percent next year and to 20 percent by 2005. This is the opposite of what "everybody expected" ten or fifteen years ago.

The fastest-growing e-commerce in the United States is in an area where there was no "commerce" until now -- in jobs for professionals and managers. Almost half of the world's largest companies now recruit through Web sites, and some two and a half million managerial and professional people (two thirds of them not even engineers or computer professionals) have their résumés on the Internet and solicit job offers over it. The result is a completely new labor market.

This illustrates another important effect of e-commerce. New distribution channels change who the customers are. They change not only how customers buy but also what they buy. They change consumer behavior, savings patterns, industry structure -- in short, the entire economy. This is what is now happening, and not only in the United States but increasingly in the rest of the developed world, and in a good many emerging countries, including mainland China.

Luther, Machiavelli, and the Salmon

THE railroad made the Industrial Revolution an accomplished fact. What had been revolution became establishment. And the boom it triggered lasted almost a hundred years. The technology of the steam engine did not end with the railroad. It led in the 1880s and 1890s to the steam turbine, and in the 1920s and 1930s to the last magnificent American steam locomotives, so beloved by railroad buffs. But the technology centered on the steam engine and in manufacturing operations ceased to be central. Instead the dynamics of the technology shifted to totally new industries that emerged almost immediately after the railroad was invented, not one of which had anything to do with steam or steam engines. The electric telegraph and photography were first, in the 1830s, followed soon thereafter by optics and farm equipment. The new and different fertilizer industry, which began in the late 1830s, in short order transformed agriculture. Public health became a major and central growth industry, with quarantine, vaccination, the supply of pure water, and sewers, which for the first time in history made the city a more healthful habitat than the countryside. At the same time came the first anesthetics.

With these major new technologies came major new social institutions: the modern postal service, the daily paper, investment banking, and commercial banking, to name just a few. Not one of them had much to do with the steam engine or with the technology of the Industrial Revolution in general. It was these new industries and institutions that by 1850 had come to dominate the industrial and economic landscape of the developed countries.

This is very similar to what happened in the printing revolution -- the first of the technological revolutions that created the modern world. In the fifty years after 1455, when Gutenberg had perfected the printing press and movable type he had been working on for years, the printing revolution swept Europe and completely changed its economy and its psychology. But the books printed during the first fifty years, the ones called incunabula, contained largely the same texts that monks, in their scriptoria, had for centuries laboriously copied by hand: religious tracts and whatever remained of the writings of antiquity. Some 7,000 titles were published in those first fifty years, in 35,000 editions. At least 6,700 of these were traditional titles. In other words, in its first fifty years printing made available -- and increasingly cheap -- traditional information and communication products. But then, some sixty years after Gutenberg, came Luther's German Bible -- thousands and thousands of copies sold almost immediately at an unbelievably low price. With Luther's Bible the new printing technology ushered in a new society. It ushered in Protestantism, which conquered half of Europe and, within another twenty years, forced the Catholic Church to reform itself in the other half. Luther used the new medium of print deliberately to restore religion to the center of individual life and of society. And this unleashed a century and a half of religious reform, religious revolt, religious wars.

At the very same time, however, that Luther used print with the avowed intention of restoring Christianity, Machiavelli wrote and published The Prince (1513), the first Western book in more than a thousand years that contained not one biblical quotation and no reference to the writers of antiquity. In no time at all The Prince became the "other best seller" of the sixteenth century, and its most notorious but also most influential book. In short order there was a wealth of purely secular works, what we today call literature: novels and books in science, history, politics, and, soon, economics. It was not long before the first purely secular art form arose, in England -- the modern theater. Brand-new social institutions also arose: the Jesuit order, the Spanish infantry, the first modern navy, and, finally, the sovereign national state. In other words, the printing revolution followed the same trajectory as did the Industrial Revolution, which began 300 years later, and as does the Information Revolution today.

What the new industries and institutions will be, no one can say yet. No one in the 1520s anticipated secular literature, let alone the secular theater. No one in the 1820s anticipated the electric telegraph, or public health, or photography.

The one thing (to say it again) that is highly probable, if not nearly certain, is that the next twenty years will see the emergence of a number of new industries. At the same time, it is nearly certain that few of them will come out of information technology, the computer, data processing, or the Internet. This is indicated by all historical precedents. But it is true also of the new industries that are already rapidly emerging. Biotechnology, as mentioned, is already here. So is fish farming.

Twenty-five years ago salmon was a delicacy. The typical convention dinner gave a choice between chicken and beef. Today salmon is a commodity, and is the other choice on the convention menu. Most salmon today is not caught at sea or in a river but grown on a fish farm. The same is increasingly true of trout. Soon, apparently, it will be true of a number of other fish. Flounder, for instance, which is to seafood what pork is to meat, is just going into oceanic mass production. This will no doubt lead to the genetic development of new and different fish, just as the domestication of sheep, cows, and chickens led to the development of new breeds among them.

But probably a dozen or so technologies are at the stage where biotechnology was twenty-five years ago -- that is, ready to emerge.

There is also a service waiting to be born: insurance against the risks of foreign-exchange exposure. Now that every business is part of the global economy, such insurance is as badly needed as was insurance against physical risks (fire, flood) in the early stages of the Industrial Revolution, when traditional insurance emerged. All the knowledge needed for foreign-exchange insurance is available; only the institution itself is still lacking.
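
The core of that available knowledge is the ordinary currency forward. The sketch below is a minimal illustration, assuming textbook covered interest parity and wholly hypothetical rates and amounts; an insurance institution proper would add pooling, credit, and margining on top:

```python
# A sketch of the textbook building block for foreign-exchange cover:
# a forward contract priced by covered interest parity.
# All rates and amounts below are hypothetical, for illustration only.

def forward_rate(spot: float, r_domestic: float,
                 r_foreign: float, years: float) -> float:
    """Fair forward exchange rate (domestic units per foreign unit),
    using simple annualized interest rates."""
    return spot * (1 + r_domestic * years) / (1 + r_foreign * years)

# An importer owing 1,000,000 foreign-currency units in six months
# can lock in the implied rate today instead of bearing the exposure.
spot = 1.10  # domestic currency per foreign unit (hypothetical)
f = forward_rate(spot, r_domestic=0.05, r_foreign=0.03, years=0.5)
print(f"Forward rate: {f:.4f}; locked-in cost: {1_000_000 * f:,.0f}")
```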

The next two or three decades are likely to see even greater technological change than has occurred in the decades since the emergence of the computer, and also even greater change in industry structures, in the economic landscape, and probably in the social landscape as well.

The Gentleman Versus the Technologist

THE new industries that emerged after the railroad owed little technologically to the steam engine or to the Industrial Revolution in general. They were not its "children after the flesh" -- but they were its "children after the spirit." They were possible only because of the mind-set that the Industrial Revolution had created and the skills it had developed. This was a mind-set that accepted -- indeed, eagerly welcomed -- invention and innovation. It was a mind-set that accepted, and eagerly welcomed, new products and new services. It also created the social values that made possible the new industries. Above all, it created the "technologist." Social and financial success long eluded the first major American technologist, Eli Whitney, whose cotton gin, in 1793, was as central to the triumph of the Industrial Revolution as was the steam engine. But a generation later the technologist -- still self-taught -- had become the American folk hero and was both socially accepted and financially rewarded. Samuel Morse, the inventor of the telegraph, may have been the first example; Thomas Edison became the most prominent. In Europe the "businessman" long remained a social inferior, but the university-trained engineer had by 1830 or 1840 become a respected "professional."

By the 1850s England was losing its predominance and beginning to be overtaken as an industrial economy, first by the United States and then by Germany. It is generally accepted that neither economics nor technology was the major reason. The main cause was social. Economically, and especially financially, England remained the great power until the First World War. Technologically it held its own throughout the nineteenth century. Synthetic dyestuffs, the first products of the modern chemical industry, were invented in England, and so was the steam turbine. But England did not accept the technologist socially. He never became a "gentleman." The English built first-rate engineering schools in India but almost none at home. No other country so honored the "scientist" -- and, indeed, Britain retained leadership in physics throughout the nineteenth century, from James Clerk Maxwell and Michael Faraday all the way to Ernest Rutherford. But the technologist remained a "tradesman." (Dickens, for instance, showed open contempt for the upstart ironmaster in his 1853 novel Bleak House.)

Nor did England develop the venture capitalist, who has the means and the mentality to finance the unexpected and unproved. A French invention, first portrayed in Balzac's monumental La Comédie humaine, in the 1840s, the venture capitalist was institutionalized in the United States by J. P. Morgan and, simultaneously, in Germany and Japan by the universal bank. But England, although it invented and developed the commercial bank to finance trade, had no institution to finance industry until two German refugees, S. G. Warburg and Henry Grunfeld, started an entrepreneurial bank in London, just before the Second World War.

Bribing the Knowledge Worker

WHAT might be needed to prevent the United States from becoming the England of the twenty-first century? I am convinced that a drastic change in the social mind-set is required -- just as leadership in the industrial economy after the railroad required the drastic change from "tradesman" to "technologist" or "engineer."

What we call the Information Revolution is actually a Knowledge Revolution. What has made it possible to routinize processes is not machinery; the computer is only the trigger. Software is the reorganization of traditional work, based on centuries of experience, through the application of knowledge and especially of systematic, logical analysis. The key is not electronics; it is cognitive science. This means that the key to maintaining leadership in the economy and the technology that are about to emerge is likely to be the social position of knowledge professionals and social acceptance of their values. For them to remain traditional "employees" and be treated as such would be tantamount to England's treating its technologists as tradesmen -- and likely to have similar consequences.

Today, however, we are trying to straddle the fence -- to maintain the traditional mind-set, in which capital is the key resource and the financier is the boss, while bribing knowledge workers to be content to remain employees by giving them bonuses and stock options. But this, if it can work at all, can work only as long as the emerging industries enjoy a stock-market boom, as the Internet companies have been doing. The next major industries are likely to behave far more like traditional industries -- that is, to grow slowly, painfully, laboriously.

The early industries of the Industrial Revolution -- cotton textiles, iron, the railroads -- were boom industries that created millionaires overnight, like Balzac's venture bankers and like Dickens's ironmaster, who in a few years grew from a lowly domestic servant into a "captain of industry." The industries that emerged after 1830 also created millionaires. But they took twenty years to do so, and it was twenty years of hard work, of struggle, of disappointments and failures, of thrift. This is likely to be true of the industries that will emerge from now on. It is already true of biotechnology.

Bribing the knowledge workers on whom these industries depend will therefore simply not work. The key knowledge workers in these businesses will surely continue to expect to share financially in the fruits of their labor. But the financial fruits are likely to take much longer to ripen, if they ripen at all. And then, probably within ten years or so, running a business with (short-term) "shareholder value" as its first -- if not its only -- goal and justification will have become counterproductive. Increasingly, performance in these new knowledge-based industries will come to depend on running the institution so as to attract, hold, and motivate knowledge workers. When this can no longer be done by satisfying knowledge workers' greed, as we are now trying to do, it will have to be done by satisfying their values, and by giving them social recognition and social power. It will have to be done by turning them from subordinates into fellow executives, and from employees, however well paid, into partners.

Economics and Morality: Paul Krugman's Framing

Lakoff and Wehling are authors of The Little Blue Book: The Essential Guide to Thinking and Talking Democratic, where morally-based framing is discussed in great detail.

In his June 11, 2012 op-ed in the New York Times, Paul Krugman goes beyond economic analysis to bring up the morality and the conceptual framing that determine economic policy. He speaks of "the people the economy is supposed to serve" -- "the unemployed" and "workers" -- and of "the mentality that sees economic pain as somehow redeeming."

Krugman is right to bring these matters up. Markets are not provided by nature. They are constructed -- by laws, rules, and institutions. All of these have moral bases of one sort or another. Hence, all markets are moral, according to someone's sense of morality. The only question is, Whose morality? In contemporary America, it is conservative versus progressive morality that governs forms of economic policy. The systems of morality behind economic policies need to be discussed.

Most Democrats, consciously or mostly unconsciously, use a moral view deriving from an idealized notion of nurturant parenting, a morality based on caring about their fellow citizens, and acting responsibly both for themselves and others with what President Obama has called "an ethic of excellence" -- doing one's best not just for oneself, but for one's family, community, and country, and for the world. Government on this view has two moral missions: to protect and empower everyone equally.

The means is The Public, which provides infrastructure, public education, and regulations to maximize health, protection and justice, a sustainable environment, systems for information and transportation, and so forth. The Public is necessary for The Private, especially private enterprise, which relies on all of the above. The liberal market economy maximizes overall freedom by serving public needs: providing needed products at reasonable prices for reasonable profits, paying workers fairly and treating them well, and serving the communities to which they belong. In short, "the people the economy is supposed to serve" are ordinary citizens. This has been the basis of American democracy from the beginning.

Conservatives hold a different moral perspective, based on an idealized notion of a strict father family. In this model, the father is The Decider, who is in charge, knows right from wrong, and teaches children morality by punishing them painfully when they do wrong, so that they can become disciplined enough to do right and thrive in the market. If they are not well-off, they are not sufficiently disciplined and so cannot be moral: they deserve their poverty. Applied to conservative politics, this yields a moral hierarchy with the wealthy, morally disciplined citizens deservedly on the top.

Democracy is seen as providing liberty, the freedom to seek one's self-interest with minimal responsibility for the interests or well-being of others. It is laissez-faire liberty. Responsibility is personal, not social. People should be able to be their own strict fathers, Deciders on their own -- the ideal of conservative populists, who are voting their morality, not their economic interests. Those who are needy are assumed to be weak and undisciplined, and therefore morally lacking. The most moral people are the rich. The slogan "Let the market decide" casts the market itself as The Decider, the ultimate authority, over which there should be no government power to regulate, tax, protect workers, or impose fines in tort cases. Those with no money are undisciplined and not moral, and so should be punished. The poor can earn redemption only by suffering and thus, supposedly, getting an incentive to do better.

If you believe all of this, and if you see the world only from this perspective, then you cannot possibly perceive the deep economic truth that The Public is necessary for The Private, for a decent private life and private enterprise. The denial of this truth, and the desire to eliminate The Public altogether, can unfortunately come naturally and honestly via this moral perspective.

When Krugman speaks of those who have "the mentality that sees economic pain as somehow redeeming," he is speaking of those who hold ordinary conservative morality -- the more than forty percent who voted for John McCain and who now support Mitt Romney -- and of Angela Merkel's call for "austerity" in Germany. It is conservative moral thought that gives the word "austerity" a positive moral connotation.

Just as the authority of a strict father must always be maintained, so the highest value in this conservative moral system is the preservation, extension, and ultimate victory of the conservative moral system itself. Preaching about the deficit is only a means to an end -- eliminating funding for The Public and bringing us closer to permanent conservative domination. From this perspective, the Paul Ryan budget makes sense -- cut funding for The Public (the antithesis of conservative morality) and reward the rich (who are the best people from a conservative moral perspective). Economic truth is irrelevant here.

Historically, American democracy is premised on the moral principle that citizens care about each other and that a robust Public is the way to act on that care. Who is the market economy for? All of us. Equally. But with the sway of conservative morality, we are moving toward a 1 percent economy -- for the bankers, the wealthy investors, and the super rich like the six members of the family that owns Walmart and has accumulated more wealth than the bottom 30 percent of Americans. Six people!

What is wrong with a 1 percent economy? As Joseph Stiglitz has pointed out in The Price of Inequality, the 1 percent economy eliminates opportunity for over a hundred million Americans. From the Land of Opportunity, we are in danger of becoming the Land of Opportunism.

If there is hope in our present situation, it lies with people who are morally complex, who are progressive on some issues and conservative on others -- often called "moderates," "independents," and "swing voters." They have both moral systems in their brains: when one is turned on, the other is turned off. The one that is activated more often grows stronger. Quoting conservative language, even to argue against it, only strengthens conservatism in the brains of the morally complex. It is vital that they hear the progressive values of the traditional American moral system: the truth that The Public is necessary for The Private, that our freedom depends on a robust Public, and that the economy is for all of us.

We must talk about those truths -- over and over, every day.

To help, we have written The Little Blue Book. It can be ordered from Barnes & Noble, Amazon, and iTunes, and after June 26 at your local bookstore.

The current state of today’s youth reflects the reality of the 21st century.

On the surface, our youngsters may appear superficial, indifferent, lost in a virtual realm of smartphones and tablets, but the truth of the matter is that they are far more developed than we are. It is a generation that lives and breathes in a connected, fast, and integral world where political borders do not exist, and where all are citizens of a global sphere. If we look into their hearts, we will discover an entire generation that does not settle for following the safe route of college, career, children, but seeks to know “What for?”. They wish to understand their role in the world and the type of connections they must establish to obtain happiness.

This is why they are defiant toward the schooling systems we have built.

It is not lifeless information that they need; they need education in the fullest sense of the word.

The educational agenda of the 21st century should not be the force-feeding of information to youths. Rather, it should provide the social skills that help them overcome the alienation and mistrust that abound in today's society. To make the youth an active force in initiating social change, we must help them understand the laws of the new world.

Even more important, we must help them see what they can do to use these laws to their favor.

In their unique way, today’s youngsters are compelling us—the generation of the old world—to realize that the world has changed and that we must rethink our position in it.

It's Time For a Learning Revolution

The United States education system really sucks. We continue to toil in a 19th-century, factory-based model of education that stresses conformity and standardization -- even though globalization has transformed the world we live in and flipped the status quo of the labor market upside down. The education system has failed miserably at creating students who have the dexterity to think creatively and critically, work collaboratively, and communicate their thoughts.

Over the past decade, when government has tried to muddle its way through education, it has gotten fairly ugly. President Bush passed No Child Left Behind and President Obama launched Race to the Top, saddling our schools with a culture of fill-in-the-bubble tests and drill-and-kill teaching methods. Schools were transformed into test-preparation factories, and memorization and regurgitation hijacked classroom learning.

Our society has failed to understand what's at stake. In the 21st-century American economy, all economic value will derive from entrepreneurship and innovation. Low-cost manufacturing will essentially be wiped out of this country and shipped to China, India, and other nations. While we may have the top companies in the world, such as Apple and Google, our competitive edge is at risk. The education system was designed to create well-disciplined employees, not entrepreneurs and innovators. According to Cathy N. Davidson, co-director of the annual MacArthur Foundation Digital Media and Learning Competitions, 65 percent of today's grade-school kids may end up doing work that hasn't been invented yet.

I propose that we institute a 21st-century model of education, rooted in 21st-century learning skills: creativity, imagination, discovery, and project-based learning. We need to stop telling kids to shut up, sit down, and passively listen to the teacher. As Sir Ken Robinson said in his widely acclaimed TED talk, "Schools kill creativity."

Policy-wise, we need a national curriculum based on lean standards, so that teachers have full autonomy to shape and mold what they teach. Ironically enough, The Onion, a satirical newspaper, published a story in August 2011 with the headline "Nation's Students to Give American Education System Yet Another Chance." We'll continue to get burned by the system year after year if we do absolutely nothing.

I'm a 16-year-old student at Syosset High School in New York, and I'm currently writing a book on education reform, tentatively titled Time to Think Different: Why America Needs a Learning Revolution. It was the great education reformer Paulo Freire who perceptively noted, "If the structure does not permit dialogue, the structure must be changed."

Students are left out of the debate, even though we have the most important opinions. I'm writing this book to offer a unique student perspective on the issue. Instead of cherishing students' passions and interests, schools destroy them. Let's raise kids to dream big and think different. America will need to rekindle the innovative spirit that has propelled it in the past. It's a do-or-die moment. Bring on the learning revolution!

I hear your concern and your passion. I am a Montessori Directress, and I believe in that creativity! I don't know how much you know about Montessori, but that's what we do: we believe in following and guiding that inner creativity. I was a Montessori child myself, and the joy and passion I have for learning, for contributing to my community, and for following my deepest dreams, I owe to Montessori. Thank you for speaking up, because people like you lead, inspire, and expand what we are as humanity!
Don't stop!
Looking forward to reading your book!
Mar

Monday, 26 March 2018

The Distraction of Data: How Brand Research Misses

By: Douglas Van Praet

In the latest in his series on neuroscience and marketing, Douglas Van Praet discusses why traditional research on "brand awareness" misses what really drives a buy, and the power of positive and negative associations in moving markets.

Along with sales, marketers primarily gauge their performance by measuring awareness and brand-attribute ratings in surveys. And this seems to make sense: that's how the mind works--by recognizing and responding to associative patterns.

But here’s the rub. People are often aware of the ad messages; what they are unaware of is how they are influenced by the messages. The attributes that drive decisions are often unstated because they are unconscious, or what cognitive scientists call non-declarative or implicit memory.

These implicit associations often determine preferences through gut feelings that override critical thinking.

Melanie Dempsey of Ryerson University and Andrew A. Mitchell of the University of Toronto demonstrated this when they exposed participants to made-up brands paired with a set of pictures and words, some negative and some positive. After seeing hundreds of images paired with brands, the subjects were unable to recall which brands were associated with which pictures and words, but they still expressed a preference for the positively conditioned brands. The authors of the study labeled it the “I like it, but I don’t know why” effect.

In a follow-up experiment, participants were presented with product information that contradicted their earlier impressions, offering them reasons to reject their brand preferences, but they still chose those with the positive associations. Conflicting factual information did not undo the prior conditioning. The associated feelings superseded rational analysis.
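To make the mechanics concrete, here is a minimal sketch (in Python) of how evaluative conditioning of this kind is often modeled: each brand accumulates an implicit valence from the images it is paired with, while no episodic record of any individual pairing is kept. The brand names, numbers, and update rule are illustrative assumptions, not the study's actual materials or method.

import random

# Toy model of the "I like it, but I don't know why" effect.
# All stimuli and parameters here are hypothetical.
random.seed(0)

valences = {"Brand A": 0.0, "Brand B": 0.0}  # implicit feelings, start neutral

# Hundreds of exposures: Brand A mostly with positive images (+1),
# Brand B mostly with negative images (-1).
pairings = [("Brand A", random.choice([1, 1, 1, -1])) for _ in range(300)]
pairings += [("Brand B", random.choice([-1, -1, -1, 1])) for _ in range(300)]
random.shuffle(pairings)

for brand, image_valence in pairings:
    # The feeling sticks; the episode itself is never stored.
    valences[brand] += 0.1 * image_valence

preferred = max(valences, key=valences.get)
print(f"Preferred: {preferred}  (valences: {valences})")
# The "participant" in this model can state a preference, but cannot recall
# a single brand-image pairing: no such memory exists to retrieve.

Run it and Brand A wins every time, for reasons the model, like the participants, cannot articulate.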

This happens partly because the brain's emotional systems can function independently of the cortex, the seat of consciousness. Memories and response repertoires can therefore be formed without our ever knowing.

Neuroscientist Joseph LeDoux gives an example of how this might happen. Let’s say that you have an argument during lunch with someone while seated at a table with a red-and-white checkered tablecloth. The next day, you meet another man who happens to be wearing a red-and-white checkered necktie and you have this gut feeling that you don’t like him. As LeDoux explains, “Consciously, I’m saying it’s my gut feeling because I don’t like the way he looks . . . But in fact, it’s being triggered by external stimuli that I’m not processing consciously.”

Through repeated exposure to colored products, our unconscious minds have learned to associate the color green with feelings of freshness and cleanliness, overriding the stated reasons for buying a whitening toothpaste.

This is why Coke Clear and Crystal Pepsi failed in the early 1990s. People didn't prefer clear cola, because the rich brown hue of cola is steeped in fond memories that color our beliefs, not just the drink. We see with our brains, not just our eyes.

The evolutionary psychologist Geoffrey Miller believes that humans display brands the way proud peacocks exhibit their tail feathers: as "fitness indicators" that advertise their potential as mates. A peacock spreads its intricate plumage to imply the natural beauty conferred by good genes, its ability to find ample food to sustain the health of the tremendous tail, and its speed and agility in avoiding predators despite the tail's cumbersome size. Generally, animals have no conscious awareness of why they display these indicators; the urge simply comes to them, and they reap the evolutionary benefits of greater attractiveness.

We humans also advertise our "fitness" to our fellow kind. The brands we choose are symbols that signify traits marking our success and worth in the pecking order. And, like the peacock, we often have no conscious awareness of why we are doing it.

I have created a seven-step process to scientifically unveil how marketing really works. These are the seven steps: 1) Interrupt the Pattern, 2) Create Comfort, 3) Lead the Imagination, 4) Shift the Feeling, 5) Satisfy the Critical Mind, 6) Change the Associations, and 7) Take Action.

When associations shift, so do market shares, because we learn and make decisions through vast neural networks of associative memory.

Take for instance what many industry experts consider the most brilliantly successful ad campaign of all time: the revered and reviled Marlboro Man.

When ad executive Leo Burnett conceived the cowboy, he engineered the most remarkable about-face in ad history. Marlboro had previously been positioned for women as a milder cigarette; the filter was even printed with a red band to hide lipstick stains, and the ads openly targeted feminine sensibilities with the ladylike slogan "Mild as May."

The rugged, masculine symbolism of the American cowboy transformed the brand's image by claiming attributes about the character, not the product. Offering intimations of rebellion, adventure, fearlessness, and strength, the ads celebrated the heroes and villains of the era popularized by Western films.

When the campaign rolled out nationally in 1955, sales jumped 3,241% to $5 billion, and the Marlboro Man would become one of the most widely recognized cultural symbols.

The explicit message of the Marlboro Country campaign was “come to where the flavor is,” but it was the flavor of the character that motivated smokers by offering oblique access to the defiant spirit of wranglers. While this may seem intuitive to ad creators up front, all too often marketers test these same ads with the wrong metrics on the back end, forgetting that “tastes good” and “makes me feel like a badass” are worlds apart.

And even when we uncover the deeper meaning with projective qualitative tools -- storytelling, imagery, metaphors, and the like -- we still can't reliably measure these elusive associations in evaluative quantitative tests, because respondents remain unaware of them or simply choose not to admit to them.

The challenge for marketers defies logic and awareness. We must identify the sometimes illogical traits we unknowingly aspire to have as people, and communicate those in advertising. That smoking can display our "fitness" to our social groups is ironic -- but then, so is the human mind.

Douglas Van Praet is the author of Unconscious Branding: How Neuroscience Can Empower (and Inspire) Marketing. He is also a marketing consultant whose approach to advertising and marketing draws from unconscious behaviorism and applies neurobiology, evolutionary psychology, and behavioral economics to business problems. He has worked at agencies in N.Y. and L.A., most recently as executive vice president at Deutsch L.A., where his responsibilities included group planning director for the Volkswagen account.

Unconscious Branding: How Neuroscience Can Empower (and Inspire) Marketing [Kindle Edition]

This is a really terrific book for marketing professionals who want to understand the difference between what consumers say versus what they do.

One of the few benefits of very long plane rides to Europe is a chance to read without interruptions. This week, I read a wonderful marketing book that I'd like to share with you. I'm really interested in understanding what consumers do versus what they say and this book has an unconventional approach to the topic.

I saw an article online by the author, and his ideas fit well with a marketing conference I was organizing with colleagues, so I knew I had to learn more.

The book is called Unconscious Branding by Douglas Van Praet. He is an EVP at the award-winning advertising agency Deutsch LA, where he focuses on account planning and strategic insights. Douglas worked on the highly acclaimed and successful mini Darth Vader commercial for Volkswagen's Passat, in which a little boy uses his super powers to start a car with the wave of a hand as an eager father with a remote helps him behind the scenes.

From my days at The Annenberg School of Communications at The University of Pennsylvania, I have always been interested in behavioral sciences, anthropology, and non-verbal communication. Since the conference I mentioned above focuses on the huge discrepancy between what a consumer says in research and his or her actual behavior, I hoped the book would provide some ideas and an approach to the issue.

I was not disappointed.

When I answer a survey, how well can I actually answer a question like why I bought a product?

* Why did I buy Seventh Generation rather than Tide for cleaning my clothes?
* Why did I go to Starbucks rather than Dunkin Donuts for coffee?
* Why do I buy gas for my car at Exxon, even when it is cheaper at other stations?
* Why do I watch one commercial over and over but scan others?
* Why do I shop at Whole Foods instead of Harris Teeter?
* Why do I pick one wine over another?

I can tell you why I did these things, but is it true? Can I accurately explain my motivation? A great example of this is buying gasoline. I stumbled upon the reason I prefer Exxon even when it is a few cents more per gallon: I found a gas credit card from Esso (Exxon's earlier name) that my Dad gave me when I started to drive in 1971. My connection goes way beyond the fuel, and for the last 40 years I have been driven, on an unconscious level, to go to an Exxon/Esso for gas. Of course, I never made that connection consciously until recently.

This is an exceptionally well-written book that poses a fairly simple premise: how can neuroscience empower and inspire marketing? Another way of saying this is that instead of relying on what consumers say, understanding their behavior at an unconscious level can be powerful. How people act, and the motivations for those actions, can give a marketing professional clarity about how to affect purchase behavior.

The book helps explain some of the core motivations behind our behaviors and our decision-making process. Van Praet approaches marketing by trying to explain and understand how we act. Through fascinating examples of classic ad campaigns, he outlines the unconscious connections that make those efforts so successful at touching consumers and motivating them to purchase.

The author has a seven-step process:

1. Interrupt perceptual and behavioral patterns
2. Create customer comfort with a brand
3. Lead the imagination to a desired conclusion or outcome
4. Shift consumer feelings in favor of a product
5. Satisfy the mind's critical filter of resistance
6. Change the associations by which memory and the mind work
7. Generate actions that ingrain positive brand impressions until they become second nature

Best of all, this book treats consumers, target markets, and demographics as human beings. It is an important distinction, since the author explains how understanding human motivation at an unconscious level helps us change attitudes and behaviors when we are marketing products. I like the human approach to marketing, and the author articulates these ideas like a mensch (Yiddish for a really fine human being).

I learned from this book that the words "emotion" and "motivation" both come from the Latin root movere, "to move." This helps us understand that the key both to connecting emotionally and to motivating a behavior is that action is required. When you touch a hot stove, you learn to stay away from the painful experience.

When a brand disappoints you by promising something and not delivering, you move away from that brand. Harnessing this insight can help you motivate a human (consumer) to take an action and move toward your brand and its solutions. The book is filled with examples from traditional and non-traditional advertising and marketing campaigns.

One case study in Unconscious Branding is the success of Red Bull.

The founder of Red Bull created a strange brew. His oddly flavored, caffeine-spiked beverage received the lowest scores in research for taste and purchase intention. Yet the Austrian-born Dietrich Mateschitz understood the importance of emotional branding and motivational communications.

He created unique emotional experiences through experiential marketing that linked the product to the emotional rollercoaster of stimulating experiences. His recent Red Bull Stratos is one of the cleverest marketing events associating emotion with a brand that I have ever witnessed. This type of marketing connection puts Red Bull's Mateschitz in a class with Jobs and Apple: both make consumers connect not only to the physical product but, at an unconscious level, plug into the brand's attitude. This is branding by masters.

So pick up a copy of Unconscious Branding. It is available at Amazon or your favorite independent bookstore, but I bet you unconsciously knew that.

Research—You're Doing It Wrong. How Uncovering The Unconscious Is Key To Creativity

By: Douglas Van Praet

If you think consumers are telling you what they want in traditional research, you’re wrong. Deutsch’s Douglas Van Praet argues that marketers must look to unconscious behavior for real creative breakthroughs.

Businesses invest billions of dollars annually in market research studies developing and testing new ideas by asking consumers questions they simply can’t answer. Asking consumers what they want, or why they do what they do, is like asking the political affiliation of a tuna fish sandwich. That’s because neuroscience is now telling us that consumers, i.e., humans, make the vast majority of their decisions unconsciously.

Steve Jobs didn’t believe in market research. When a reporter once asked him how much research he conducted to develop the iPad, he quipped, “None. It isn’t the consumers’ job to know what they want.” And according to some measures, the iPad became the most successful consumer product launch ever and Apple went on to become the most valuable company of all-time.

Marketers are living a delusion that the conscious mind, the self-chatter in their heads and the so-called "verbatims" in surveys and focus groups, are the guiding forces of action. They are talking to themselves, not to the deeper desires of people, rationalizing the need for the wrong tools aimed at the wrong target and the wrong mind. They have hamstrung an industry with backwards thinking, encouraging concepts that beat the research testing system rather than move people in the real world. Not surprisingly, there is a sea of sameness and mediocrity, and merely 2 out of 10 products launched in the U.S. succeed. The truth is that the unconscious mind, the seat of our motivations, communicates in feelings, not words.

Einstein once said: “The intuitive mind is a sacred gift and the rational mind is a faithful servant. We have created a society that honors the servant and has forgotten the gift.” Creativity, the indispensable fuel of economic growth, is being killed by a corporate culture of wrongheadedness. It’s time to stop the violence!

It’s time to honor the gift of the unconscious mind!

I know this firsthand because I have been “that guy.” I am a brand strategist, a market researcher, a sometimes bearer of bad news and unfortunately, a killer of creativity based upon flimsy reasoning and flawed research. “Don’t kill the messenger,” I’d jest. “Let’s kill your idea instead,” I’d mutter beneath my breath. My frustration with the tools of my trade led me to search for a more enlightening message.

I found it not in the research of marketers but in the research of cognitive authorities in evolutionary psychology, neurobiology, and behavioral economics. I became a behavioral change therapist specializing in unconscious behaviorism, helping people change their lives for the better, the same things they seek in brands. I reverse-engineered what I learned, starting with the things that were proven to yield real results in real people. I created a seven-step process to behavior change, one that I have been applying to ad strategies with remarkable success ever since.

These are the seven steps: 1) Interrupt the Pattern, 2) Create Comfort, 3) Lead the Imagination, 4) Shift the Feeling, 5) Satisfy the Critical Mind, 6) Change the Associations, and 7) Take Action.

These steps also explain the success of highly effective iconic campaigns created by those who have perhaps intuited these laws of influence. Take, for instance, the famous Old Spice campaign created by Wieden+Kennedy, which leveraged the first of my seven steps: Interrupt the Pattern.

Freud once conceded: “Everywhere I go I find a poet has been there before me. Poets are masters of us ordinary men, in knowledge of the mind, because they drink at streams which we have not yet made accessible to science.” Great ad people are like these poets. Fortunately neuroscience is now empowering access to the streams of our collective unconscious, a new view that will help create and sell better ideas. Let’s deconstruct a brilliant case of effective use of “pattern interrupts.”

Brands are learned behaviors, or expectations of outcomes based upon past experience, that eventually become second nature. The pathway to our unconscious, and the best way to learn something, is through conscious attention. And nothing focuses our attention better than surprise and novelty. That's because our brain is a pattern recognizer, a prediction machine. It learns through the satisfying release of dopamine, the "feel good" chemical messenger of "wanting" behavior, and novelty activates this system. The purpose of this surge in dopamine is to draw attention to potentially important information and a possible new pattern by signaling the brain to take notice and learn -- which happen to be the key roles of advertising.
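One common way to formalize this idea is a reward-prediction-error update in the style of the Rescorla-Wagner model; dopamine signaling is often described as tracking exactly this kind of error. The Python sketch below, with a made-up outcome and learning rate, shows why a novel or surprising stimulus (large error) drives a large learning update while a fully predicted one drives almost none. It illustrates the general principle only; it is not drawn from any campaign research.

ALPHA = 0.3  # learning rate (illustrative)

def update(expectation, outcome):
    """One learning step: the change is proportional to the surprise."""
    error = outcome - expectation      # dopamine-like prediction error
    return expectation + ALPHA * error, error

expectation = 0.0                      # the stimulus starts out fully novel
for trial in range(1, 6):
    expectation, error = update(expectation, outcome=1.0)
    print(f"trial {trial}: error={error:.2f}, expectation={expectation:.2f}")

# Trial 1 produces the largest error (pure novelty) and the largest update;
# by trial 5 the outcome is nearly fully predicted, the error is close to
# zero, and the same stimulus no longer commands attention -- which is why
# a campaign needs fresh "pattern interrupts" to re-engage the system.

In other words, each interrupt resets the surprise that the brain's learning machinery feeds on.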

Old Spice transformed its stodgy image with an infectious campaign that was brimming with these pattern interrupts, creating a cooler, contemporary image. It introduced the world to charismatic hunk Isaiah Mustafa, "the man your man could smell like." The magic behind this effort is not just the smooth pitchman of body wash but the equally smooth and unsuspected "interrupts."

One of these spots has the great-smelling Isaiah go, in the span of a mere 30 seconds, from standing at an outdoor shower to log rolling in the wilderness, to carrying a gourmet cake, to remodeling a kitchen with a power saw, to swan diving off a waterfall into a hot tub, and finally . . . as the walls of the hot tub collapse, we are left with him straddling a classically cool motorcycle. Our brains are surprised and amused . . . again and again and again . . . with the reward of dopamine and the payout of attention. With an amazing 1.4 billion impressions, the campaign captured more than attention--it changed behavior, spiking sales by 27% in the six months following the launch. One of the original commercials for this campaign has alone generated a massive 43 million views on YouTube to date.

These are not creative self-indulgences but hardworking devices that universally galvanize our focus and spark a rush of good vibes that we all instinctively share. And that dopamine high is essentially that elusive viral “buzz” marketers demand from their agencies but also make so difficult to create.

Douglas Van Praet is the author of Unconscious Branding: How Neuroscience Can Empower (and Inspire) Marketing. He is also Executive Vice President at agency Deutsch L.A., where his responsibilities include Group Planning Director for the Volkswagen account. Van Praet’s approach to advertising and marketing draws from unconscious behaviorism and applies neurobiology, evolutionary psychology and behavioral economics to business problems.

Research—You’re Doing It Wrong. How Uncovering The Unconscious Is Key To Creativity

By: Douglas Van Praet

If you think consumers are telling you what they want in traditional research, you’re wrong. Deutsch’s Douglas Van Praet argues that marketers must look to unconscious behavior for real creative breakthroughs.

Businesses invest billions of dollars annually in market research studies developing and testing new ideas by asking consumers questions they simply can’t answer. Asking consumers what they want, or why they do what they do, is like asking the political affiliation of a tuna fish sandwich. That’s because neuroscience is now telling us that consumers, i.e., humans, make the vast majority of their decisions unconsciously.

Steve Jobs didn’t believe in market research. When a reporter once asked him how much research he conducted to develop the iPad, he quipped, “None. It isn’t the consumers’ job to know what they want.” And according to some measures, the iPad became the most successful consumer product launch ever and Apple went on to become the most valuable company of all-time.

Marketers are living a delusion that the conscious mind, the self-chatter in their heads and the so-called “verbatims” in surveys and focus groups, are the guiding forces of action. They are talking to themselves, not to the deeper desires of people, rationalizing the need for the wrong tools aimed at the wrong target, and the wrong mind. They have hamstrung an industry based upon backwards thinking by encouraging concepts that beat the research testing system, rather than move people in the real world. Not surprisingly, there is a sea of sameness and mediocrity and merely 2 out of 10 products launched in the U.S. succeed. The truth is the unconscious mind, the seat of our motivations, communicates in feelings, not words.

Einstein once said: “The intuitive mind is a sacred gift and the rational mind is a faithful servant. We have created a society that honors the servant and has forgotten the gift.” Creativity, the indispensable fuel of economic growth, is being killed by a corporate culture of wrongheadedness. It’s time to stop the violence! It’s time to honor the gift of the unconscious mind!

I know this firsthand because I have been “that guy.” I am a brand strategist, a market researcher, a sometimes bearer of bad news and unfortunately, a killer of creativity based upon flimsy reasoning and flawed research. “Don’t kill the messenger,” I’d jest. “Let’s kill your idea instead,” I’d mutter beneath my breath. My frustration with the tools of my trade led me to search for a more enlightening message.

I found it not in the research of marketers but in the research of cognitive authorities in evolutionary psychology, neurobiology, and behavioral economics. I became a behavioral change therapist specializing in unconscious behaviorism, helping people change their lives for the better, the same things they seek in brands. I reverse-engineered what I learned, starting with the things that were proven to yield real results in real people. I created a seven-step process to behavior change, one that I have been applying to ad strategies with remarkable success ever since.

These are the seven steps: 1) Interrupt the Pattern, 2) Create Comfort, 3) Lead the Imagination, 4) Shift the Feeling, 5) Satisfy the Critical Mind, 6) Change the Associations, and 7) Take Action.

These steps also explain the success of highly effective iconic campaigns created by those that have perhaps intuited these laws of influence. Take for instance the famous Old Spice campaign created by Wieden+Kennedy that leveraged the first of my seven steps: Interrupt the Pattern.

Freud once conceded: “Everywhere I go I find a poet has been there before me. Poets are masters of us ordinary men, in knowledge of the mind, because they drink at streams which we have not yet made accessible to science.” Great ad people are like these poets. Fortunately neuroscience is now empowering access to the streams of our collective unconscious, a new view that will help create and sell better ideas. Let’s deconstruct a brilliant case of effective use of “pattern interrupts.”

Brands are learned behaviors or expectations of outcomes based upon past experience that eventually become second nature. The pathway to our unconscious and the best way to learn something is through conscious attention. And nothing focuses our attention better than surprise and novelty. That’s because our brain is a pattern recognizer or prediction machine. It learns through the satisfying release of dopamine, the “feel good” chemical messenger of “wanting” behavior. And novelty activates this system. The purpose of this surge in dopamine is to draw attention to potentially important information and a possible new pattern by sending a signal to the brain to take notice and learn, which happens to be the key roles of advertising.

Old Spice transformed its stodgy image with an infectious campaign that was brimming with these pattern interrupts, creating a cooler contemporary image. It introduced the world to the charismatic hunk of Isaiah Mustafa or “the man your man could smell like.” The magic behind this effort is not just the smooth pitchman of body wash, but the equally smooth and unsuspecting “interrupts.”

One of these spots has the great-smelling Isaiah go, in the span of a mere 30 seconds, from standing at an outdoor shower to log rolling in the wilderness, to carrying a gourmet cake, to remodeling a kitchen with a power saw, to swan diving off a waterfall into a hot tub, and finally . . . as the walls of the hot tub collapse, we are left with him straddling a classically cool motorcycle. Our brains are surprised and amused . . . again and again and again . . . with the reward of dopamine and the payout of attention. With an amazing 1.4 billion impressions, it captured more than attention--it changed behavior, spiking sales over a year ago by 27% in the six months since the launch. One of the original commercials for this campaign alone has generated a massive 43 million views on YouTube to date.

These are not creative self-indulgences but hardworking devices that universally galvanize our focus and spark a rush of good vibes that we all instinctively share. And that dopamine high is essentially that elusive viral “buzz” marketers demand from their agencies but also make so difficult to create.

Douglas Van Praet is the author of Unconscious Branding: How Neuroscience Can Empower (and Inspire) Marketing. He is also Executive Vice President at agency Deutsch L.A., where his responsibilities include Group Planning Director for the Volkswagen account. Van Praet’s approach to advertising and marketing draws from unconscious behaviorism and applies neurobiology, evolutionary psychology and behavioral economics to business problems.

Research—You’re Doing It Wrong. How Uncovering The Unconscious Is Key To Creativity

By: Douglas Van Praet

If you think consumers are telling you what they want in traditional research, you’re wrong. Deutsch’s Douglas Van Praet argues that marketers must look to unconscious behavior for real creative breakthroughs.

Businesses invest billions of dollars annually in market research studies developing and testing new ideas by asking consumers questions they simply can’t answer. Asking consumers what they want, or why they do what they do, is like asking the political affiliation of a tuna fish sandwich. That’s because neuroscience is now telling us that consumers, i.e., humans, make the vast majority of their decisions unconsciously.

Steve Jobs didn’t believe in market research. When a reporter once asked him how much research he conducted to develop the iPad, he quipped, “None. It isn’t the consumers’ job to know what they want.” And according to some measures, the iPad became the most successful consumer product launch ever and Apple went on to become the most valuable company of all-time.

Marketers are living a delusion that the conscious mind, the self-chatter in their heads and the so-called “verbatims” in surveys and focus groups, are the guiding forces of action. They are talking to themselves, not to the deeper desires of people, rationalizing the need for the wrong tools aimed at the wrong target, and the wrong mind. They have hamstrung an industry based upon backwards thinking by encouraging concepts that beat the research testing system, rather than move people in the real world. Not surprisingly, there is a sea of sameness and mediocrity and merely 2 out of 10 products launched in the U.S. succeed. The truth is the unconscious mind, the seat of our motivations, communicates in feelings, not words.

Einstein once said: “The intuitive mind is a sacred gift and the rational mind is a faithful servant. We have created a society that honors the servant and has forgotten the gift.” Creativity, the indispensable fuel of economic growth, is being killed by a corporate culture of wrongheadedness. It’s time to stop the violence! It’s time to honor the gift of the unconscious mind!

I know this firsthand because I have been “that guy.” I am a brand strategist, a market researcher, a sometimes bearer of bad news and unfortunately, a killer of creativity based upon flimsy reasoning and flawed research. “Don’t kill the messenger,” I’d jest. “Let’s kill your idea instead,” I’d mutter beneath my breath. My frustration with the tools of my trade led me to search for a more enlightening message.

I found it not in the research of marketers but in the research of cognitive authorities in evolutionary psychology, neurobiology, and behavioral economics. I became a behavioral change therapist specializing in unconscious behaviorism, helping people change their lives for the better, the same things they seek in brands. I reverse-engineered what I learned, starting with the things that were proven to yield real results in real people. I created a seven-step process to behavior change, one that I have been applying to ad strategies with remarkable success ever since.

These are the seven steps: 1) Interrupt the Pattern, 2) Create Comfort, 3) Lead the Imagination, 4) Shift the Feeling, 5) Satisfy the Critical Mind, 6) Change the Associations, and 7) Take Action.

These steps also explain the success of highly effective iconic campaigns created by those that have perhaps intuited these laws of influence. Take for instance the famous Old Spice campaign created by Wieden+Kennedy that leveraged the first of my seven steps: Interrupt the Pattern.

Freud once conceded: “Everywhere I go I find a poet has been there before me. Poets are masters of us ordinary men, in knowledge of the mind, because they drink at streams which we have not yet made accessible to science.” Great ad people are like these poets. Fortunately neuroscience is now empowering access to the streams of our collective unconscious, a new view that will help create and sell better ideas. Let’s deconstruct a brilliant case of effective use of “pattern interrupts.”

Brands are learned behaviors or expectations of outcomes based upon past experience that eventually become second nature. The pathway to our unconscious and the best way to learn something is through conscious attention. And nothing focuses our attention better than surprise and novelty. That’s because our brain is a pattern recognizer or prediction machine. It learns through the satisfying release of dopamine, the “feel good” chemical messenger of “wanting” behavior. And novelty activates this system. The purpose of this surge in dopamine is to draw attention to potentially important information and a possible new pattern by sending a signal to the brain to take notice and learn, which happens to be the key roles of advertising.

Old Spice transformed its stodgy image with an infectious campaign that was brimming with these pattern interrupts, creating a cooler contemporary image. It introduced the world to the charismatic hunk of Isaiah Mustafa or “the man your man could smell like.” The magic behind this effort is not just the smooth pitchman of body wash, but the equally smooth and unsuspecting “interrupts.”

One of these spots has the great-smelling Isaiah go, in the span of a mere 30 seconds, from standing at an outdoor shower to log rolling in the wilderness, to carrying a gourmet cake, to remodeling a kitchen with a power saw, to swan diving off a waterfall into a hot tub, and finally . . . as the walls of the hot tub collapse, we are left with him straddling a classically cool motorcycle. Our brains are surprised and amused . . . again and again and again . . . with the reward of dopamine and the payout of attention. With an amazing 1.4 billion impressions, it captured more than attention--it changed behavior, spiking sales over a year ago by 27% in the six months since the launch. One of the original commercials for this campaign alone has generated a massive 43 million views on YouTube to date.

These are not creative self-indulgences but hardworking devices that universally galvanize our focus and spark a rush of good vibes that we all instinctively share. And that dopamine high is essentially that elusive viral “buzz” marketers demand from their agencies but also make so difficult to create.

Douglas Van Praet is the author of Unconscious Branding: How Neuroscience Can Empower (and Inspire) Marketing. He is also Executive Vice President at agency Deutsch L.A., where his responsibilities include Group Planning Director for the Volkswagen account. Van Praet’s approach to advertising and marketing draws from unconscious behaviorism and applies neurobiology, evolutionary psychology and behavioral economics to business problems.

Wednesday, 7 March 2018

Reset, Italian Style - Essential and Expendable

Having spent quite a bit of time a couple of weeks ago with people at or near the top of private-sector organizations in Milan and Rome, I can see how the economic crisis has spread to Italy, but also how there is enough distinctiveness in the Italian economy and society that its manifestations are different from those in the US.

First, there is no widespread banking crisis. The Italian banking system is heavily regulated, and its banks are largely locally owned and operated. They are what New York Times columnist Paul Krugman called "boring banks" in a terrific piece he wrote last month.

Most Italian bankers are local businessmen and businesswomen, known in their community, and knowledgeable about the folks to whom they lend money. They are conservative and risk-averse. They are comfortable but not rich. They did not do credit default swaps.

I can remember the days of post-World War II boring banking in the US. Within a four-block area in Coolidge Corner in Brookline, MA, there were four or five banks, all locally or regionally owned and run. The men (all men, as best I can remember) who ran those banks were active in the community and involved in civic life. I watched those local banks get gobbled up and disappear, sometime around the 1980s, replaced by huge national bank companies and equally impersonal ATMs. Krugman thinks we need to get back to boring banks. Seems right to me. But there are a lot of folks in the banking industry who will fight for the deregulated world that made them fabulously rich until their house of cards crashed last year. My guess, though, is that a highly regulated, locally driven banking system will be one of the consequences of the current turmoil. (And, further -- though that is the subject of another post -- the pattern of heavy regulation plus lots of local autonomy will be part of Reset going forward, not only in the banking industry but in government-private sector relations generally and in individual organizations as well.)

This is not about nostalgia for the good old days. It is about the function of banks in the economy: providing capital for businesses to invest and grow, and for families to buy and fix up their homes. Deregulation led to consolidation, fostering the morphing of savings-and-loan institutions into venture capitalists and creating incentives for banks to feed the financial bubble. And despite James Surowiecki's characteristically insightful piece in this week's New Yorker about the need for capital to drive the economy, it seems to me there will always be institutions and people with lots of money to fuel big growth, but that local banks serve a different and critical purpose for ordinary folks, as Italy illustrates.

Italy has also been somewhat insulated because the Italian economy was lagging behind most of its counterparts in the European Union. There was no consumption frenzy. Families were not inundated with debt. Houses are not under water. There was not too far to fall.

What makes Italy so appealing -- and sometimes so frustrating, to both natives and expats -- is that the country seems determined to hold on to its definition of the good life as having more to do with family and food than money and materialism. Last time I looked, for example, Italy had the highest number of hours worked per person and the lowest productivity rate in the European Union. Italian workers talk a lot, take long lunches, and generally enjoy themselves. As our contractor said to us, "We Italians like to start things. We are not so interested in finishing."

Italy is feeling the pinch in industries that rely on exports of consumption goods or imports of visitors. We have a friend who owns and rents a fabulous villa on the Amalfi coast -- take a look, it is really extraordinary -- whose bookings through agencies are down dramatically, although he has managed to keep his business thriving through his own contacts and repeat clients. And in my time in Italy, I talked with folks from industries like fashion and automotive parts who are seeing orders from Japan and the US plummet. Fiat seems to be a notable exception, seeing this worldwide crisis as an opportunity not to hunker down but to Reset and make big bets on the future, with investments in Chrysler and perhaps General Motors that, if they work and the economy recovers, will make Fiat a worldwide industry leader.

But Italy faces considerable cultural constraints on moving forward in the current turmoil. Domenico Bodega, Professor of Organization Theory and Dean of the Faculty of Economics at Catholic University in Milan, was a respondent at one of the sessions I did for a group of 150 private-sector corporate bigwigs in Milan. I spoke about Reset and the need for adaptive leadership in the current reality. Bodega responded that Italy is not well positioned to adapt because deeply held norms get in the way of an adaptive response: deference to authority, reliance on charismatic leadership, discomfort with uncertainty, a culture of alibi, and an aversion to efficiency in the use of time.

Selfishly, of course, I do not want Italy to adapt too much or too quickly. I love it there in part because of those eccentricities, such as the values on food and family and relationships, which permit me to relax as soon as I land at the airport in Rome.

The challenge for Italy is the challenge that we all face in adaptation, in thriving under uncertainty:

Do we have the courage, and the skill, to separate the essential from the expendable?

Can we make good, tough choices about all that we value: what to keep and what to leave behind?

Can Italy preserve what makes it so special and move away from those practices and norms which are holding it back, as it responds to the growth pressure from the European Union and from its own citizens, and its reliance on exports and tourism?

Sunday, 25 February 2018

Innovation Isn't About New Products, It's About Changing Behavior

July 31, 2012

Companies are not so much in the business of what we buy as in the business of how we act.

Behavior is the unknowable variable in every innovation, and it is the variable that most determines the opportunity a new business model has to evolve and take advantage of the new behavior.

It's The Behavior, Stupid

We are at the tail end of an era that has focused almost entirely on the innovation of products and services, and we are at the beginning of a new era that focuses on the innovation of what I like to call "behavioral business models." These models go beyond asking how we can make what we make better and cheaper, or asking how we can do what we do faster. They are about asking why we do what we do to begin with. And the question of why is almost always tied to the question of how markets behave.

When Apple created iTunes it didn't just create a faster, cheaper, better digital format for music, it altered the very nature of the relationship between music and people. eBay did not just create a platform for auctions, it changed the way we look at the experience of shopping and how community plays a role in the experience. When GM created OnStar it didn't just make getting from point A to point B faster, it changed the relationship between auto manufacturer and buyer, and fundamentally altered the reason we buy a car.

Google did not invent Internet search--there were nearly fifty software vendors delivering Internet-based search, some for as long as twenty-five years before Google!--but Google changed the way we interact with the Internet and how our behaviors are tracked and analyzed, allowing advertisers to find and pay for buyers in a way that was inconceivable before.

All of these are examples of innovations in behavior that led to entirely new business models. Yet we continue to be obsessed with technology innovation. To paraphrase James Carville's now-popular political pun, "It's not the technology, it's the behavior, stupid."

The greatest shift in the way we view innovation will be that the innovation surrounding behavior will need to be as continuous a process as the innovation of products has been over the last hundred years. It's here that the greatest payback and value of innovation in the cloud has yet to be fully understood and exploited.

Unfortunately, far too many of us are stuck in an old model of innovation--just as surely as we are stuck in line waiting to take part in the new one.

Innovative organizations are those that can depart quickly from their planned trajectory and jump onto a new opportunity; they're organizations that recognize and take an active role in introducing new behaviors that were previously unknown. It is ultimately the speed with which companies do this, and their willingness to experiment in new and unanticipated areas, that determines the extent to which their innovation is "open."

This changes the idea of open innovation to mean more than going outside the company to find new ideas from experts; it means developing a collaborative innovation model that intimately binds the market to the process of innovation, in lockstep. That does not suggest that companies are held hostage by their customers, who only know to ask for incremental innovation in what they have already experienced.

Instead, it means that companies need to push the envelope of innovation based on observations of what a market's behaviors are and then work closely with the market to identify how innovations can add value in unexpected ways.

Cloud-based Dialogue

The cloud is the ultimate open system for this sort of innovation, one that is influenced by factors that are both unknown and unknowable. In other words, no amount of time, information, focus groups, or traditional market research will increase the certainty with which we can innovate. The most important thing to do in the cloud is to realize that innovation must involve openness and disruption. Then we have to minimize the risk and uncertainty so that we increase the opportunity for finding novel approaches to solving problems and expand the ability to quickly scale, so we can address these problems once a resonant nerve is struck.

This is precisely the type of innovation that companies like Apple, Google, Facebook, and Netflix have enabled by constantly challenging their customers to adapt to new offerings. For Facebook, this creates a fairly consistent market tension. Whenever Facebook delivers a new feature, such as its Timeline capability, there is an almost immediate market backlash, which is followed by a lull in the market's pushback and an eventual acceptance and integration of the new capability. This dance is repeated on a regular basis. While it does create some degree of tension, the result is a steady disruptive force that provides both Facebook and its users with more than just a path to sustained innovation; it also provides a periodic jump to a new type of behavior that would otherwise be seen by most companies as incredibly risky. The benefit for Facebook is that it has a built-in cloud that allows any innovation to be immediately presented to its customers.

This sort of dialogue between the marketplace and its providers has never been as pronounced and apparently dysfunctional as it is in the cloud, where voices are amplified to an unprecedented magnitude. But that increase in the decibel level of market pushback can be a death knell for innovation. The cloud is not inherently for or against innovation, any more than the Internet is. Both are simply new platforms for conversations that have the power to drive both positive and negative momentum. And it is this point that companies need to pay especially close attention to.

Successful innovators don’t just ask customers and clients to do something different; they ask them to become someone different.

Facebook asks its users to become more open and sharing with their personal information, even if they might be less extroverted in real life. Amazon turned shoppers into information-rich consumers who could share real-time data and reviews, cross-check prices, and weigh algorithmic recommendations on their paths to online purchase. Who shops now without doing at least some digital comparisons of price and performance?

Successful innovators ask users to embrace--or at least tolerate--new values, new skills, new behaviors, new vocabularies, new ideas, new expectations, and new aspirations. They transform their customers. Successful innovators reinvent their customers as well as their businesses. Their innovations make customers better and make better customers.

Google provides an excellent example of understanding and acting on "The Ask." The company’s PageRank algorithms--honed and polished by cofounders Larry Page and Sergey Brin--fundamentally redefined the power and potential of Internet "search." Google’s link-based architecture quickly became more than the world’s most successful search engine. The technology effectively made its users partners and collaborators. The multibillion-dollar innovation investments made in hardware, software, and network technologies were also investments in the collective intelligence of Google’s users.

"Google gets smarter every time someone makes a link on the web," declared Tim O’Reilly, the publisher and Internet investor who coined the Web 2.0 sobriquet. "Google gets smarter every time someone makes a search. It gets smarter every time someone clicks on an ad. And it immediately acts on that information to improve the experience for everyone else. It’s for this reason I argue that the real heart of Web 2.0 is harnessing collective intelligence."

O’Reilly is correct. Google’s algorithms continuously build on its customers’ collective intelligence. What makes the company’s collective intelligence algorithms so brilliant, says Google research vice president Alfred Spector, is that Google is constantly learning from--as well as about--its users. This is Google’s distinguishing competence.
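For readers who want to see the mechanics, here is a toy sketch of the link-based ranking idea (PageRank) in Python. It is illustrative only: the three-page "web," the damping factor, and the iteration count are assumptions for the sake of the example, and Google's production ranking system is vastly more elaborate.

```python
# A toy PageRank power iteration. Illustrative only: the page names
# and damping factor are assumptions, not Google's actual system.
def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if outlinks:
                # Each page passes its rank evenly to the pages it links to.
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
            else:
                # A page with no outlinks spreads its rank evenly.
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
        rank = new_rank
    return rank

# Adding a single link ("b" -> "a") changes every page's score,
# which is the sense in which the index "learns" from each new link.
web = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
print(pagerank(web))
web["b"].append("a")
print(pagerank(web))
```

The point of the sketch is the feedback loop O’Reilly describes: every link (and, in the real system, every search and every click) feeds back into the scores, so the ranking improves without anyone at Google touching it.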

Consequently, declaring that Google is in the search business radically misunderstands both its technology and business model.

Google is just as heavily committed to the "searcher" business.

The company has created and refined literally hundreds of millions of searchers even as it performs hundreds of billions of searches. Google continuously improves the quality of its search by improving the capabilities of its searchers--and vice versa. As Google’s searchers grow smarter and more sophisticated, so does Google. Win/win.

This enormous global pool of new and improved human capital had never before been profitably tapped. Google has reaped disproportionate returns not just on its capital investments in search software and silicon but from its human capital investments in searchers.

Just as Henry Ford’s automobiles created a new nation of drivers, Google’s search engine(s) networked a new world of searchers. Just as improving a car demands a different design sensibility than improving drivers, enhancing search poses technically distinct challenges from enhancing searchers. Henry Ford mass-produced drivers; Sergey Brin and Larry Page globally interconnected searchers.

These entrepreneurs redefined and transformed the customer capabilities of their eras. They created a new vision of the customer.

"Larry [Page] is into making people what he wants them to be," says a former Google executive who worked closely with the cofounders, "which is a little smarter."

So what did young Google’s search innovations ask its customers to become?

Google asked customers to become people who wouldn’t think twice about spending a few moments to type in some words on their computer--don’t worry about typos!--and quickly scan the list of clickable links that instantly appeared. They could be confident their brief time commitment would give them exactly the link(s) they wanted or needed. Google’s innovation asked its users to become "instant searchers." All for free. Hasta la vista, AltaVista.

For anyone with Internet access, Google’s initial "innovation ask" was simple, easy, and low maintenance. (Compared to, say, Ford Motor’s Model T innovator’s ask, which minimally required the purchase of a horseless carriage, and the time and effort to learn how to drive.)
The genius of Google’s innovation ask--and, make no mistake, it is genius--was making impatience a customer virtue. Google aspired to create impatient users who expected easy interfaces and ridiculously fast results at absolutely zero cost. Google left for its rivals the more patient and plodding souls who didn’t mind paying a small price for slightly more complicated interfaces, noticeably slower response times, and pretty good results. Of course, those customers turned out not to exist.

The result? Google’s competitors have been forced to live in search environments defined by Google’s innovator’s ask. They’ve been forced to come up with differentiating innovator’s asks of their own. Microsoft’s Bing and Wolfram Alpha both understand that their asks--not just their technologies--need to distinguish them from Google. They have no choice but to ask their users to become a measurably different kind of searcher. They need to complement as well as compete. They need to distinctively invest in their users. This goes beyond traditional strategy and branding.

Classic MBA innovation marketing or marketing innovation analysis would say Google fulfilled a latent or explicit need and/or delighted its customers. This is not wrong. But it is woefully inadequate. The innovator’s ask suggests Google dramatically redefined--even recreated--the market by simultaneously training and successfully learning from its customers.

Google’s ease of search and collective intelligence algorithms represent ongoing innovation investments in the human capital and capabilities of customers. Google successfully made its hundreds of millions of customers more valuable to the company and for each other. Google’s advertising-based business model monetized the innovation investments it had made.

Google is, indeed, in the search business, but its future success remains predicated on customers becoming better, more frequent, more discriminating, and more engaged searchers. The human capital is king. Google’s customers did become a little smarter. Maybe a lot smarter.

Michael Schrage, a research fellow at MIT Sloan School’s Center for Digital Business, is the author of Serious Play. He has consulted on innovation issues for Microsoft, Facebook, Google, IBM, Procter & Gamble, China Mobile, Disney, and Intuit, to name a few.

SB: How can a lecturer use attention, but make sure not to abuse it? Or put another way, does repetitive use of phasic alertness, getting an audience to refocus their attention every few minutes, have declining effects over time?

JM: I do not believe in entertainment in teaching, during the holy time information is being transferred from one person to another. I do believe in engagement, however, and there is one crucial distinction that separates the two: the content of the emotionally competent stimulus (“hook”). If the story/anecdote/case-history is directly relevant to the topic at hand (either illustrating a previously explained point or introducing a new one), the student remains engaged. Cracking a joke for the sake of a break, or telling an irrelevant anecdote at a strategic time is a form of patronizing, and students everywhere can detect it, usually with resentment, inattention or both.

Do you think the size of a classroom has any effect on students’ ability to pay attention? Does Posner’s model of attention change if we are alone in conversation, vs. in an audience of 99 other people listening to a lecture?

I don’t think the size of the classroom has anything to do with the functional neural architecture proposed by Posner, but there is a universe of difference in how it behaves. The behavior has to do with our confounded predilection for socializing. People behave very differently in large crowds than they do in small crowds or even one on one. Very different teaching strategies must be deployed for each.

Bligh’s book “What’s the Use of Lectures?” identifies 18-25 minutes, based on his assessment of psychology studies, as the key breakpoint for human attention in classrooms. Whether it’s 10 or 25, why do you think so few schools or training events use units of this size as the structure for their days, or their lessons?

I don’t know why schools don’t pay attention to attention. Perhaps it is a lack of content knowledge. If I had my way, every teacher on the planet would take two courses: First, an acting course, the only star in the academic firmament capable of teaching people how to manipulate their bodies and voices to project information. Second, a cognitive neuroscience course, one that teaches people how the brain learns, so teachers can understand that such projections follow specific rules of engagement.

Made in the USA: More Complex Than You Think

In today's political discourse, it is commonplace for the left to berate major corporations for moving jobs overseas in order to pad profits through the use of cheap labor. It is an argument that easily fits into sound bites and is used to explain away rising unemployment and the death of manufacturing in America. The right, on the other hand, blames organized labor and a draconian tax code for preventing America from competing on an even playing field. The truth, however, is much more complex. A recent eye-opening article in the New York Times examines Apple, one of our nation's great corporate icons, and discusses why the company had no choice but to produce its best-selling gadgets overseas.

What I found most interesting about the article was that, according to Apple executives and other experts briefed on the matter, neither one of those political arguments carries much weight. The reality is that Apple, adhering to a sound business model, strives to make the best products possible at a quality level that is second to none. Unfortunately, the workforce and infrastructure in the United States are not up to the task.

I was surprised that there was no mention of a greedy corporate culture choosing to rely on slave labor, nor was there mention of organized labor making it impossible to have a flexible work force. The reality is that Apple simply could not find enough skilled labor in the United States to make the complex technical products that are the reality in today's gadget-hungry marketplace. When Steve Jobs told President Obama that "those jobs aren't coming back," it was not because Apple couldn't make a profit manufacturing in America, it was because America simply did not have the labor and capital resources to build an iPhone.

According to the article, more than 8,700 industrial engineers are required to oversee the iPhone supply chain. In the United States, it would have taken nine months to find that many engineers. In China, it took 15 days. It is a damning indictment of the American education system that a company cannot find enough skilled workers to build an iPhone. Skilled workers are supposed to be America's strength; the common explanation is that uncompetitive wages for unskilled work are the real problem.

While a lack of American skilled labor is a hindrance, it is our nation's lack of infrastructure that makes managing a fast-paced, flexible, and highly technical supply chain extremely difficult. According to Apple executives quoted in the article, "Asian supply chains have surpassed what's in the U.S." The result is that, "... we can't compete at this point." That has left America to rely on its service sector, which can provide steady jobs; but it relies too heavily on domestic consumption, does not provide upward mobility, and doesn't require technical skills to add innovation to the economy.

Putting the political rhetoric aside, our nation has some soul searching to do if it really intends to compete in a globalized economy. If America's leaders want to see us regain our manufacturing dominance, the answer is not as easy as "right-to-work" laws, a changing tax code, or currency manipulation. Germany, for example, has proven that it can make quality exports and still pay decent wages and benefits to its workers.

The real answer lies in teaching Americans the skills they need to compete globally, such as vocational schools focusing on supply-chain manufacturing and a focus on the type of industrial engineering required for large-scale production. Finally, we need a public policy environment where emphasis is placed on infrastructure modernization, allowing us to produce and move goods much more quickly and flexibly. This means an upgraded rail infrastructure, smart power grids, incentives to build modern factories, and an investment in energy production that will reduce the cost of doing business.

Claudio Dematté

Forget Manufacturing -- "Those" Jobs Aren't Coming Back

With more than 14 million people seeking work in the U.S., there is clearly a critical need for more jobs. Numerous politicians have stated that the number one concern of the U.S. government should be to focus on this initiative, particularly within the manufacturing sector.

But is manufacturing really the right place to zero in on? I would argue that there are a number of reasons that prove otherwise.

There is no doubt that the U.S. has lost millions of manufacturing jobs over the last several decades, and the recession has only accelerated the trend. However, often overlooked in the discussion is the fact that many U.S. manufacturing corporations are performing quite well financially. The auto industry, for example, is doing very well in part because it has reduced labor costs dramatically over the last decade -- in fact, General Motors recently reported that its manufacturing labor costs have gone from 30 to 10 percent.

There are two reasons why General Motors, Ford and many others have significantly reduced labor costs. One, perhaps the most obvious, is the advancement of new technologies: machinery and computers have replaced people throughout most modern facilities. Second, management innovations have reduced the need for labor and as a result require fewer employees. Employee involvement, slimmer organization structures and work teams have made jobs more interesting, demanding and challenging while reducing the number of individuals needed. Employees are now cross-trained, performing duties outside of their main focus, including maintenance, set-up and operations management. They do not have to wait for someone to come to repair a machine or reprogram it; these services are now built into a company's strategic staffing plans.

The implication of the changes in technology and management for job growth within the field of manufacturing is clear: there are always going to be fewer jobs in most manufacturing plants.

While there will be some growth, it will only extend to the degree that it is absolutely necessary in order for an organization to increase its production levels. Even when companies need to increase production levels, most are likely to add very few workers as they have learned to get along with less. Ironically, this has cost but also saved U.S. jobs. Without the management innovations and technology advancements that have been adopted by U.S. corporations even more jobs would have been lost to low wage countries.

Some might pose the question: what about the jobs that have been offshored? Will they come back? This may happen to a limited degree; however, many of these are low-value-added jobs that are best done in low-wage economies, likely to return to the U.S. only if they can be completed at an inexpensive rate alongside new technologies and management practices.

Overall, the combination of new practices has significantly transformed manufacturing in the U.S. Today, much of the manufacturing work that remains is high value added and increasingly requires skilled labor. Gone to less developed, low-wage economies are the labor-intensive, low-value-added manufacturing positions that once accounted for many U.S. manufacturing jobs; they are not coming back. The U.S. has to move beyond its focus on manufacturing jobs to find the answer to our unemployment problem.


Edward E. Lawler III is a distinguished professor of business at the University of Southern California (USC) Marshall School of Business and founder/director of the University’s Center for Effective Organizations (CEO), one of the country’s leading management research organizations.

He’s authored more than 40 books, including his most recent, Management Reset: Organizing for Sustainable Effectiveness (Jossey-Bass, 2011).

In their earlier book, Built to Change, a title that read more like a ‘me too’ product, Lawler and Worley had already challenged conventional thinking about organizational agility and response to change. Past designs for organizational excellence assumed that the goal of a change effort stayed constant over the time in which the change could be accomplished; adaptation efforts revolved around goals that fostered relatively newer forms of stability, and stability itself was identified as the root of change effectiveness. Built to Change called out that strategy, structure and organizational design had to change simultaneously as the environment changed.

Even as they discovered the effects management practices were having on institutions at large, they realized that some of man’s irreversible choices were depleting finite resources. The sustainability of organizations thus became inextricably intertwined with leadership as a team sport, and shared goals and values became a significant part of that journey.

Management Reset is about embracing the complexity required to be a sustainable organization.

“It is now clear that financial sustainability is a necessary but insufficient organization objective”, write the authors, thus opening up the possibility of reading through some refreshingly fundamental aspects of designing organizations for economic, social and environmental sustainability.

The book is written for consultants who advise organizations on strategy and change, and the authors want it to be read by academics concerned with organization design, organization development and change. They consider it the third major management reset since the beginning of the twentieth century.

Command and Control organizations (CCOs) responded to the volume needs of capitalistic markets with bureaucratic controls. High Involvement organizations (HIOs) showed the advantages of tapping into human beings’ latent potential in the second management reset. CCOs and HIOs are designed to be stable. Few have appreciated thus far how sharply we will have to deviate from the management approaches of the past in order to be sustainable. Fewer have explored the impact on strategy, structure, decision-making practices, human resource management and leadership.

A Sustainably Managed Organization (SMO) requires an integrated approach, far different from the fashion equivalent of putting lipstick on a pig. The mindset for the CCO and the HIO is normally one of compliance, and the tension of interests between shareholders and larger stakeholders, like society and natural resources, remains. Hence the case made in this book for integrating the agility of the HIO with the responsibility of the SMO.

Forces of Agility

Technology – Virtual presence technology is proliferating, closing the distance between people and challenging the concept of time. The amount of research and knowledge produced is also increasing, pushing the boundaries of change and innovation.

Globalization – The disappearance of host and parent status in manufacturing and research centers has forced organizations to continuously modify the services and products they offer, as well as where and how they produce them, to enable access for their customers.

Workforce – Gender, national origin, race, age and language have come to occupy a more central place in explaining success. The life-span of employment varies not merely with the economic development of the host economy, but also with the sector of employment, age discrimination laws and the financial wherewithal to retire.

Talent, intellectual property and brand image are more perishable and, because they feed off each other, require a different mind-set to manage. Knowledge work is harder to direct, measure and perform.

Designing for the outcomes that ensure the organizational agility required to evolve from CCOs and HIOs into SMOs is the basic tenet of this book.

The authors identify potential SMOs from among the organizations they have researched; Patagonia, PepsiCo and Unilever are featured, for example.

The book points towards:

A) the way value is created, which includes strategies for sustainable effectiveness as against sustainable competitive advantage;

B) the way work is organized, including governance at the board, structures for operations and sustainable work systems, as against conventional control through job designs; the focus is on organizational designs that dynamically adapt to business environments;

C) the way people are treated, which includes new notions on performance, reward systems and the management of talent for SMOs; and

D) the way behavior is guided, which entails orchestrating performance through leadership as teamwork, and the transformation to sustainable management, where followership is as imperative as leadership.

Contrary to popular perception, and much against conventional intuition, research on leadership development is quite clear that experience is the best developer of managers and leaders.

The development of ‘crucible’ jobs that provide learning experiences may seem the obvious first choice for learning design, but moving people rapidly from one job to the next robs them of the learning required to overcome quick-fix mentalities - the kind that gloss over the long-term impact of actions. The short-term thinking that brought us here, focused selectively on the customer and the shareholder, will in this sense prevent us from reaching the SMO prototype.

Change acceleration towards SMO transformations is facilitated by models, language, frameworks and practices that help people talk about and discuss the relevance of change to their work. Formal processes that facilitate learning from experience will be the key to both crucible experiences and the realization of the emerging identity of the organization. The interconnectedness of different social systems in a global world is brought out clearly in this book. It may take an evolved leadership team to embrace its message.

Not long ago, Apple boasted that its products were made in America. Today, few are. Almost all of the 70 million iPhones, 30 million iPads and 59 million other products Apple sold last year were manufactured overseas.

Why can’t that work come home? Mr. Obama asked.

The president’s question touched upon a central conviction at Apple. It isn’t just that workers are cheaper abroad. Rather, Apple’s executives believe the vast scale of overseas factories as well as the flexibility, diligence and industrial skills of foreign workers have so outpaced their American counterparts that “Made in the U.S.A.” is no longer a viable option for most Apple products.

Apple has become one of the best-known, most admired and most imitated companies on earth, in part through an unrelenting mastery of global operations. Last year, it earned over $400,000 in profit per employee, more than Goldman Sachs, Exxon Mobil or Google.

However, what has vexed Mr. Obama as well as economists and policy makers is that Apple — and many of its high-technology peers — are not nearly as avid in creating American jobs as other famous companies were in their heydays.

Apple employs 43,000 people in the United States and 20,000 overseas, a small fraction of the over 400,000 American workers at General Motors in the 1950s, or the hundreds of thousands at General Electric in the 1980s. Many more people work for Apple’s contractors: an additional 700,000 people engineer, build and assemble iPads, iPhones and Apple’s other products. But almost none of them work in the United States. Instead, they work for foreign companies in Asia, Europe and elsewhere, at factories that almost all electronics designers rely upon to build their wares.

“Apple’s an example of why it’s so hard to create middle-class jobs in the U.S. now,” said Jared Bernstein, who until last year was an economic adviser to the White House. “If it’s the pinnacle of capitalism, we should be worried.”

Apple executives say that going overseas, at this point, is their only option. One former executive described how the company relied upon a Chinese factory to revamp iPhone manufacturing just weeks before the device was due on shelves. Apple had redesigned the iPhone’s screen at the last minute, forcing an assembly line overhaul. New screens began arriving at the plant near midnight.

A foreman immediately roused 8,000 workers inside the company’s dormitories, according to the executive. Each employee was given a biscuit and a cup of tea, guided to a workstation and within half an hour started a 12-hour shift fitting glass screens into beveled frames. Within 96 hours, the plant was producing over 10,000 iPhones a day.

“The speed and flexibility is breathtaking,” the executive said. “There’s no American plant that can match that.”

Similar stories could be told about almost any electronics company — and outsourcing has also become common in hundreds of industries, including accounting, legal services, banking, auto manufacturing and pharmaceuticals.

But while Apple is far from alone, it offers a window into why the success of some prominent companies has not translated into large numbers of domestic jobs. What’s more, the company’s decisions pose broader questions about what corporate America owes Americans as the global and national economies are increasingly intertwined.

“Companies once felt an obligation to support American workers, even when it wasn’t the best financial choice,” said Betsey Stevenson, the chief economist at the Labor Department until last September. “That’s disappeared. Profits and efficiency have trumped generosity.”

Companies and other economists say that notion is naïve. Though Americans are among the most educated workers in the world, the nation has stopped training enough people in the mid-level skills that factories need, executives say.

To thrive, companies argue they need to move work where it can generate enough profits to keep paying for innovation. Doing otherwise risks losing even more American jobs over time, as evidenced by the legions of once-proud domestic manufacturers — including G.M. and others — that have shrunk as nimble competitors have emerged.

Apple was provided with extensive summaries of The New York Times’s reporting for this article, but the company, which has a reputation for secrecy, declined to comment.

This article is based on interviews with more than three dozen current and former Apple employees and contractors — many of whom requested anonymity to protect their jobs — as well as economists, manufacturing experts, international trade specialists, technology analysts, academic researchers, employees at Apple’s suppliers, competitors and corporate partners, and government officials.

Privately, Apple executives say the world is now such a changed place that it is a mistake to measure a company’s contribution simply by tallying its employees — though they note that Apple employs more workers in the United States than ever before.

They say Apple’s success has benefited the economy by empowering entrepreneurs and creating jobs at companies like cellular providers and businesses shipping Apple products. And, ultimately, they say curing unemployment is not their job.

“We sell iPhones in over a hundred countries,” a current Apple executive said. “We don’t have an obligation to solve America’s problems. Our only obligation is making the best product possible.”

‘I Want a Glass Screen’

In 2007, a little over a month before the iPhone was scheduled to appear in stores, Mr. Jobs beckoned a handful of lieutenants into an office. For weeks, he had been carrying a prototype of the device in his pocket.

Mr. Jobs angrily held up his iPhone, angling it so everyone could see the dozens of tiny scratches marring its plastic screen, according to someone who attended the meeting. He then pulled his keys from his jeans.

People will carry this phone in their pocket, he said. People also carry their keys in their pocket. “I won’t sell a product that gets scratched,” he said tensely. The only solution was using unscratchable glass instead. “I want a glass screen, and I want it perfect in six weeks.”

After one executive left that meeting, he booked a flight to Shenzhen, China. If Mr. Jobs wanted perfect, there was nowhere else to go.

For over two years, the company had been working on a project — code-named Purple 2 — that presented the same questions at every turn: how do you completely reimagine the cellphone? And how do you design it at the highest quality — with an unscratchable screen, for instance — while also ensuring that millions can be manufactured quickly and inexpensively enough to earn a significant profit?

The answers, almost every time, were found outside the United States. Though components differ between versions, all iPhones contain hundreds of parts, an estimated 90 percent of which are manufactured abroad. Advanced semiconductors have come from Germany and Taiwan, memory from Korea and Japan, display panels and circuitry from Korea and Taiwan, chipsets from Europe and rare metals from Africa and Asia. And all of it is put together in China.

In its early days, Apple usually didn’t look beyond its own backyard for manufacturing solutions. A few years after Apple began building the Macintosh in 1983, for instance, Mr. Jobs bragged that it was “a machine that is made in America.” In 1990, while Mr. Jobs was running NeXT, which was eventually bought by Apple, the executive told a reporter that “I’m as proud of the factory as I am of the computer.” As late as 2002, top Apple executives occasionally drove two hours northeast of their headquarters to visit the company’s iMac plant in Elk Grove, Calif.

But by 2004, Apple had largely turned to foreign manufacturing.

Guiding that decision was Apple’s operations expert, Timothy D. Cook, who replaced Mr. Jobs as chief executive last August, six weeks before Mr. Jobs’s death. Most other American electronics companies had already gone abroad, and Apple, which at the time was struggling, felt it had to grasp every advantage.

In part, Asia was attractive because the semiskilled workers there were cheaper. But that wasn’t driving Apple. For technology companies, the cost of labor is minimal compared with the expense of buying parts and managing supply chains that bring together components and services from hundreds of companies.

For Mr. Cook, the focus on Asia “came down to two things,” said one former high-ranking Apple executive. Factories in Asia “can scale up and down faster” and “Asian supply chains have surpassed what’s in the U.S.” The result is that “we can’t compete at this point,” the executive said.

The impact of such advantages became obvious as soon as Mr. Jobs demanded glass screens in 2007.

For years, cellphone makers had avoided using glass because it required precision in cutting and grinding that was extremely difficult to achieve. Apple had already selected an American company, Corning Inc., to manufacture large panes of strengthened glass. But figuring out how to cut those panes into millions of iPhone screens required finding an empty cutting plant, hundreds of pieces of glass to use in experiments and an army of midlevel engineers. It would cost a fortune simply to prepare.

Then a bid for the work arrived from a Chinese factory.

When an Apple team visited, the Chinese plant’s owners were already constructing a new wing. “This is in case you give us the contract,” the manager said, according to a former Apple executive. The Chinese government had agreed to underwrite costs for numerous industries, and those subsidies had trickled down to the glass-cutting factory. It had a warehouse filled with glass samples available to Apple, free of charge. The owners made engineers available at almost no cost. They had built on-site dormitories so employees would be available 24 hours a day.

The Chinese plant got the job.

“The entire supply chain is in China now,” said another former high-ranking Apple executive. “You need a thousand rubber gaskets? That’s the factory next door. You need a million screws? That factory is a block away. You need that screw made a little bit different? It will take three hours.”

In Foxconn City

An eight-hour drive from that glass factory is a complex, known informally as Foxconn City, where the iPhone is assembled. To Apple executives, Foxconn City was further evidence that China could deliver workers — and diligence — that outpaced their American counterparts.

That’s because nothing like Foxconn City exists in the United States.

The facility has 230,000 employees, many working six days a week, often spending up to 12 hours a day at the plant. Over a quarter of Foxconn’s work force lives in company barracks and many workers earn less than $17 a day. When one Apple executive arrived during a shift change, his car was stuck in a river of employees streaming past. “The scale is unimaginable,” he said.

Foxconn employs nearly 300 guards to direct foot traffic so workers are not crushed in doorway bottlenecks. The facility’s central kitchen cooks an average of three tons of pork and 13 tons of rice a day. While factories are spotless, the air inside nearby teahouses is hazy with the smoke and stench of cigarettes.

Foxconn Technology has dozens of facilities in Asia and Eastern Europe, and in Mexico and Brazil, and it assembles an estimated 40 percent of the world’s consumer electronics for customers like Amazon, Dell, Hewlett-Packard, Motorola, Nintendo, Nokia, Samsung and Sony.

“They could hire 3,000 people overnight,” said Jennifer Rigoni, who was Apple’s worldwide supply demand manager until 2010, but declined to discuss specifics of her work. “What U.S. plant can find 3,000 people overnight and convince them to live in dorms?”

In mid-2007, after a month of experimentation, Apple’s engineers finally perfected a method for cutting strengthened glass so it could be used in the iPhone’s screen. The first truckloads of cut glass arrived at Foxconn City in the dead of night, according to the former Apple executive. That’s when managers woke thousands of workers, who crawled into their uniforms — white and black shirts for men, red for women — and quickly lined up to assemble, by hand, the phones. Within three months, Apple had sold one million iPhones. Since then, Foxconn has assembled over 200 million more.

Foxconn, in statements, declined to speak about specific clients.

“Any worker recruited by our firm is covered by a clear contract outlining terms and conditions and by Chinese government law that protects their rights,” the company wrote. Foxconn “takes our responsibility to our employees very seriously and we work hard to give our more than one million employees a safe and positive environment.”

The company disputed some details of the former Apple executive’s account, and wrote that a midnight shift, such as the one described, was impossible “because we have strict regulations regarding the working hours of our employees based on their designated shifts, and every employee has computerized timecards that would bar them from working at any facility at a time outside of their approved shift.” The company said that all shifts began at either 7 a.m. or 7 p.m., and that employees receive at least 12 hours’ notice of any schedule changes.

Foxconn employees, in interviews, have challenged those assertions.

Another critical advantage for Apple was that China provided engineers at a scale the United States could not match. Apple’s executives had estimated that about 8,700 industrial engineers were needed to oversee and guide the 200,000 assembly-line workers eventually involved in manufacturing iPhones. The company’s analysts had forecast it would take as long as nine months to find that many qualified engineers in the United States.

In China, it took 15 days.
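As a back-of-the-envelope illustration of what that gap means in recruiting terms, here is a minimal sketch. The 8,700-engineer, nine-month and 15-day figures come from the reporting above; the daily rates, and the assumption of roughly 30 days per month, are derived for illustration only.

```python
# Implied hiring rates from the article's figures. The nine-month
# span is converted at roughly 30 days per month (an assumption).
engineers = 8700
us_days = 9 * 30      # ~270 days forecast for the United States
china_days = 15       # actual time reported in China
print(f"U.S.:  ~{engineers / us_days:.0f} engineers per day")
print(f"China: ~{engineers / china_days:.0f} engineers per day")
# Roughly 32 a day versus 580 a day: an eighteen-fold difference
# in how fast a qualified technical work force could be assembled.
```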

Companies like Apple “say the challenge in setting up U.S. plants is finding a technical work force,” said Martin Schmidt, associate provost at the Massachusetts Institute of Technology. In particular, companies say they need engineers with more than high school, but not necessarily a bachelor’s degree. Americans at that skill level are hard to find, executives contend. “They’re good jobs, but the country doesn’t have enough to feed the demand,” Mr. Schmidt said.

Some aspects of the iPhone are uniquely American. The device’s software, for instance, and its innovative marketing campaigns were largely created in the United States. Apple recently built a $500 million data center in North Carolina. Crucial semiconductors inside the iPhone 4 and 4S are manufactured in an Austin, Tex., factory by Samsung, of South Korea.

But even those facilities are not enormous sources of jobs. Apple’s North Carolina center, for instance, has only 100 full-time employees. The Samsung plant has an estimated 2,400 workers.

“If you scale up from selling one million phones to 30 million phones, you don’t really need more programmers,” said Jean-Louis Gassée, who oversaw product development and marketing for Apple until he left in 1990. “All these new companies — Facebook, Google, Twitter — benefit from this. They grow, but they don’t really need to hire much.”

It is hard to estimate how much more it would cost to build iPhones in the United States. However, various academics and manufacturing analysts estimate that because labor is such a small part of technology manufacturing, paying American wages would add up to $65 to each iPhone’s expense. Since Apple’s profits are often hundreds of dollars per phone, building domestically, in theory, would still give the company a healthy reward.

But such calculations are, in many respects, meaningless because building the iPhone in the United States would demand much more than hiring Americans — it would require transforming the national and global economies. Apple executives believe there simply aren’t enough American workers with the skills the company needs or factories with sufficient speed and flexibility. Other companies that work with Apple, like Corning, also say they must go abroad.

Manufacturing glass for the iPhone revived a Corning factory in Kentucky, and today, much of the glass in iPhones is still made there. After the iPhone became a success, Corning received a flood of orders from other companies hoping to imitate Apple’s designs. Its strengthened glass sales have grown to more than $700 million a year, and it has hired or continued employing about 1,000 Americans to support the emerging market.

But as that market has expanded, the bulk of Corning’s strengthened glass manufacturing has occurred at plants in Japan and Taiwan.

“Our customers are in Taiwan, Korea, Japan and China,” said James B. Flaws, Corning’s vice chairman and chief financial officer. “We could make the glass here, and then ship it by boat, but that takes 35 days. Or, we could ship it by air, but that’s 10 times as expensive. So we build our glass factories next door to assembly factories, and those are overseas.”

Corning was founded in America 161 years ago and its headquarters are still in upstate New York. Theoretically, the company could manufacture all its glass domestically. But it would “require a total overhaul in how the industry is structured,” Mr. Flaws said. “The consumer electronics business has become an Asian business. As an American, I worry about that, but there’s nothing I can do to stop it. Asia has become what the U.S. was for the last 40 years.”

Middle-Class Jobs Fade

The first time Eric Saragoza stepped into Apple’s manufacturing plant in Elk Grove, Calif., he felt as if he were entering an engineering wonderland.

It was 1995, and the facility near Sacramento employed more than 1,500 workers. It was a kaleidoscope of robotic arms, conveyor belts ferrying circuit boards and, eventually, candy-colored iMacs in various stages of assembly. Mr. Saragoza, an engineer, quickly moved up the plant’s ranks and joined an elite diagnostic team. His salary climbed to $50,000. He and his wife had three children. They bought a home with a pool.

“It felt like, finally, school was paying off,” he said. “I knew the world needed people who can build things.”

At the same time, however, the electronics industry was changing, and Apple — with products that were declining in popularity — was struggling to remake itself. One focus was improving manufacturing. A few years after Mr. Saragoza started his job, his bosses explained how the California plant stacked up against overseas factories: the cost, excluding the materials, of building a $1,500 computer in Elk Grove was $22 a machine. In Singapore, it was $6. In Taiwan, $4.85. Wages weren’t the major reason for the disparities. Rather it was costs like inventory and how long it took workers to finish a task.
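Worked out against the article's own numbers, the comparison looks like this (a minimal sketch: the $1,500 price and the per-machine costs are the article's figures; the percentages are derived):

```python
# Assembly cost as a share of a $1,500 computer's price, using the
# article's figures; the percentage column is derived, not reported.
price = 1500.0
assembly_cost = {"Elk Grove": 22.00, "Singapore": 6.00, "Taiwan": 4.85}
for site, cost in assembly_cost.items():
    print(f"{site}: ${cost:.2f} per machine ({cost / price:.2%} of price)")
# Even the costliest site is under 1.5% of the sticker price, which
# is why the article points to inventory and task time, not wages.
```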

“We were told we would have to do 12-hour days, and come in on Saturdays,” Mr. Saragoza said. “I had a family. I wanted to see my kids play soccer.”

Modernization has always caused some kinds of jobs to change or disappear.

As the American economy transitioned from agriculture to manufacturing and then to other industries, farmers became steelworkers, and then salesmen and middle managers. These shifts have carried many economic benefits, and in general, with each progression, even unskilled workers received better wages and greater chances at upward mobility.

But in the last two decades, something more fundamental has changed, economists say. Midwage jobs started disappearing. Particularly among Americans without college degrees, today’s new jobs are disproportionately in service occupations — at restaurants or call centers, or as hospital attendants or temporary workers — that offer fewer opportunities for reaching the middle class.

Even Mr. Saragoza, with his college degree, was vulnerable to these trends. First, some of Elk Grove’s routine tasks were sent overseas. Mr. Saragoza didn’t mind. Then the robotics that made Apple a futuristic playground allowed executives to replace workers with machines. Some diagnostic engineering went to Singapore. Middle managers who oversaw the plant’s inventory were laid off because, suddenly, a few people with Internet connections were all that were needed.

Mr. Saragoza was too expensive for an unskilled position. He was also insufficiently credentialed for upper management. He was called into a small office in 2002 after a night shift, laid off and then escorted from the plant. He taught high school for a while, and then tried a return to technology. But Apple, which had helped anoint the region as “Silicon Valley North,” had by then converted much of the Elk Grove plant into an AppleCare call center, where new employees often earn $12 an hour.

There were employment prospects in Silicon Valley, but none of them panned out. “What they really want are 30-year-olds without children,” said Mr. Saragoza, who today is 48, and whose family now includes five of his own.

After a few months of looking for work, he started feeling desperate. Even teaching jobs had dried up. So he took a position with an electronics temp agency that had been hired by Apple to check returned iPhones and iPads before they were sent back to customers. Every day, Mr. Saragoza would drive to the building where he had once worked as an engineer, and for $10 an hour with no benefits, wipe thousands of glass screens and test audio ports by plugging in headphones.

Paydays for Apple

As Apple’s overseas operations and sales have expanded, its top employees have thrived. Last fiscal year, Apple’s revenue topped $108 billion, a sum larger than the combined state budgets of Michigan, New Jersey and Massachusetts. Since 2005, when the company’s stock split, share prices have risen from about $45 to more than $427.

Some of that wealth has gone to shareholders. Apple is among the most widely held stocks, and the rising share price has benefited millions of individual investors, 401(k)’s and pension plans. The bounty has also enriched Apple workers. Last fiscal year, in addition to their salaries, Apple’s employees and directors received stock worth $2 billion and exercised or vested stock and options worth an added $1.4 billion.

The biggest rewards, however, have often gone to Apple’s top employees. Mr. Cook, Apple’s chief, last year received stock grants — which vest over a 10-year period — that, at today’s share price, would be worth $427 million, and his salary was raised to $1.4 million. In 2010, Mr. Cook’s compensation package was valued at $59 million, according to Apple’s security filings.

A person close to Apple argued that the compensation received by Apple’s employees was fair, in part because the company had brought so much value to the nation and world. As the company has grown, it has expanded its domestic work force, including manufacturing jobs. Last year, Apple’s American work force grew by 8,000 people.

While other companies have sent call centers abroad, Apple has kept its centers in the United States. One source estimated that sales of Apple’s products have caused other companies to hire tens of thousands of Americans. FedEx and United Parcel Service, for instance, both say they have created American jobs because of the volume of Apple’s shipments, though neither would provide specific figures without permission from Apple, which the company declined to provide.

“We shouldn’t be criticized for using Chinese workers,” a current Apple executive said. “The U.S. has stopped producing people with the skills we need.”

What’s more, Apple sources say the company has created plenty of good American jobs inside its retail stores and among entrepreneurs selling iPhone and iPad applications.

After two months of testing iPads, Mr. Saragoza quit. The pay was so low that he was better off, he figured, spending those hours applying for other jobs. On a recent October evening, while Mr. Saragoza sat at his MacBook and submitted another round of résumés online, halfway around the world a woman arrived at her office. The worker, Lina Lin, is a project manager in Shenzhen, China, at PCH International, which contracts with Apple and other electronics companies to coordinate production of accessories, like the cases that protect the iPad’s glass screens. She is not an Apple employee. But Mrs. Lin is integral to Apple’s ability to deliver its products.

Mrs. Lin earns a bit less than what Mr. Saragoza was paid by Apple. She speaks fluent English, learned from watching television and in a Chinese university. She and her husband put a quarter of their salaries in the bank every month. They live in a 1,080-square-foot apartment, which they share with their in-laws and son.

“There are lots of jobs,” Mrs. Lin said. “Especially in Shenzhen.”

Innovation’s Losers

Toward the end of Mr. Obama’s dinner last year with Mr. Jobs and other Silicon Valley executives, as everyone stood to leave, a crowd of photo seekers formed around the president. A slightly smaller scrum gathered around Mr. Jobs. Rumors had spread that his illness had worsened, and some hoped for a photograph with him, perhaps for the last time.

Eventually, the orbits of the men overlapped. “I’m not worried about the country’s long-term future,” Mr. Jobs told Mr. Obama, according to one observer. “This country is insanely great. What I’m worried about is that we don’t talk enough about solutions.”

At dinner, for instance, the executives had suggested that the government should reform visa programs to help companies hire foreign engineers. Some had urged the president to give companies a “tax holiday” so they could bring back overseas profits which, they argued, would be used to create work. Mr. Jobs even suggested it might be possible, someday, to locate some of Apple’s skilled manufacturing in the United States if the government helped train more American engineers.

Economists debate the usefulness of those and other efforts, and note that a struggling economy is sometimes transformed by unexpected developments. The last time analysts wrung their hands about prolonged American unemployment, for instance, in the early 1980s, the Internet hardly existed. Few at the time would have guessed that a degree in graphic design was rapidly becoming a smart bet, while studying telephone repair a dead end.

What remains unknown, however, is whether the United States will be able to leverage tomorrow’s innovations into millions of jobs.

In the last decade, technological leaps in solar and wind energy, semiconductor fabrication and display technologies have created thousands of jobs. But while many of those industries started in America, much of the employment has occurred abroad. Companies have closed major facilities in the United States to reopen in China. By way of explanation, executives say they are competing with Apple for shareholders. If they cannot rival Apple’s growth and profit margins, they won’t survive.

“New middle-class jobs will eventually emerge,” said Lawrence Katz, a Harvard economist. “But will someone in his 40s have the skills for them? Or will he be bypassed for a new graduate and never find his way back into the middle class?”

The pace of innovation, say executives from a variety of industries, has been quickened by businessmen like Mr. Jobs. G.M. went as long as half a decade between major automobile redesigns. Apple, by comparison, has released five iPhones in four years, doubling the devices’ speed and memory while dropping the price that some consumers pay.

Before Mr. Obama and Mr. Jobs said goodbye, the Apple executive pulled an iPhone from his pocket to show off a new application — a driving game — with incredibly detailed graphics. The device reflected the soft glow of the room’s lights. The other executives, whose combined worth exceeded $69 billion, jostled for position to glance over his shoulder. The game, everyone agreed, was wonderful.

An article on Sunday about the reasons iPhones are largely produced overseas omitted a passage immediately after the second continuation, from Page A22 to Page A23, in one edition. The full passage should have read: “Another critical advantage for Apple was that China provided engineers at a scale the United States could not match. Apple’s executives had estimated that about 8,700 industrial engineers were needed to oversee and guide the 200,000 assembly-line workers eventually involved in manufacturing iPhones. The company’s analysts had forecast it would take as long as nine months to find that many qualified engineers in the United States.”

A version of this article appeared in print on January 22, 2012, on page A1 of the New York edition with the headline: How U.S. Lost Out On iPhone Work.

Can Manufacturing Jobs Come Back?

What We Should Learn From Apple and Foxconn

Apple aficionados suffered a blow a couple of weeks ago. All of those beautiful products, it turns out, are the output of an industrial complex that is little more than one step removed from slave labor.

But of course there is nothing new here. Walmart has long prospered as a company that found ways to drive down the cost of the stuff Americans want. And China has long been the place where companies go to drive down costs.

For several decades, dating back to the post-World War II years, relatively unfettered access to the American consumer has been the means of pulling Asian workers out of deep poverty. Japan emerged as an industrial colossus under the tutelage of W. Edwards Deming. The Asian tigers came next. Vietnam and Sri Lanka have nibbled around the edges, while China embraced the export-led economic development model under Deng Xiaoping.

While Apple users have been beating their breasts over the revelations of labor conditions and suicides that sullied their glass screens, the truth is that Foxconn is just the most recent incarnation of outsourced manufacturing plants -- textiles and Nike shoes come to mind -- where working conditions are below American standards.

While the Apple-Foxconn story has focused attention on the plight of workers living in dormitories who can be summoned to their work stations in a matter of minutes, it has also become part of the debate about whether the U.S. should seek to bring back manufacturing jobs or should instead accept the conclusion reached by some economists that America not only does not need manufacturing jobs, but can no longer expect to have them.

Nobel laureate Joseph Stiglitz argued recently that our difficulties recovering from the 2008 collapse are a function of our migration from a manufacturing to a service economy.

While this migration has been ongoing for years, Stiglitz has concluded that the trend is irreversible. His historical metaphor is the Great Depression, which he suggests was prolonged because the nation was in the midst of a permanent transition from an agrarian economy to manufacturing, as a revolution in farm productivity required a large segment of the labor force to leave the farm.

The problem with this deterministic conclusion that America can no longer support a manufacturing sector is that it seems to ignore the facts surrounding the decline we have experienced. In his recent article, Stiglitz notes that at the beginning of the Great Depression one-fifth of all Americans worked on farms, while today "2 percent of Americans produce more food than we can consume." This is a stark contrast with trends in the U.S. manufacturing sector. Manufacturing employment, which stood at approximately 18.7 million in 1980, has declined by 37%, or 7 million jobs, in the years since. The increase in labor productivity over that timeframe -- 8% in real terms -- explains little of the decline. Unlike agriculture, where we continue to produce more than we consume, most of the decline in manufacturing jobs has tracked the steady increase in our imports of manufactured goods and our steadily growing merchandise trade deficit.

The chart below, based on data from the Bureau of Economic Analysis, illustrates the growth in personal spending on manufactured goods in the United States and the parallel growth in the share of that spending that goes to imported goods. These changes unfolded over half a century. Going back to the 1960s, we imported about 10% of the stuff we bought. By the end of the 1970s -- a period of significant declines in core industries such as steel and automobiles -- that figure grew to over 25%. The trend has continued to the present day, and we now import around 60% of the stuff we buy.

Over the same timeframe, as illustrated below, the merchandise trade deficit -- the value of goods we import less the value we export -- exploded. By the time of the 2008 collapse, the trade deficit in manufactured goods translated into 3.5 million "lost" jobs, if one applies a constant metric of labor productivity to the value of that trade deficit.
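The arithmetic behind that "lost jobs" metric is straightforward: divide the dollar value of the trade deficit by the dollar value of output a single manufacturing worker produces in a year. A minimal sketch in Python, with both input figures as illustrative assumptions (they are not taken from the article or from BEA data, and are chosen only so the arithmetic lands near the article's 3.5 million):

# Back-of-envelope version of the "constant labor productivity" metric.
# Both numbers below are assumptions for illustration only.
manufactured_goods_deficit = 700e9   # assumed deficit in dollars, circa 2008
output_per_worker = 200_000          # assumed annual manufacturing output per worker, in dollars

implied_lost_jobs = manufactured_goods_deficit / output_per_worker
print(f"Implied 'lost' manufacturing jobs: {implied_lost_jobs / 1e6:.1f} million")
# prints: Implied 'lost' manufacturing jobs: 3.5 million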

This is where Stiglitz's comparison with the Depression-era migration from an agrarian economy breaks down. As he duly notes, the economics of food production has changed, and today America's agricultural sector feeds the nation and sustains a healthy trade surplus as well, with a far smaller share of the American workforce. In contrast, the decline in manufacturing jobs reflects the opening of world labor markets. Unlike agriculture, we are not self-supporting in manufactured goods; we have simply decided to buy abroad what we once made at home.

This shift has been embraced across our society.

For private industry, outsourcing to Asia has been driven by profit-maximizing behavior and the pressures of surviving in competitive markets. For consumers, innovations in retail from Walmart to Amazon.com have fed the urge to get the greatest value for the lowest price. And for politicians -- Democrats and Republicans alike -- embracing globalization was part of the post-Cold War tradeoff: We open our markets, and the world competes economically and reduces the threat of nuclear conflict.

The notion that American industry, consumers and politicians were co-conspiring in the destruction of the American working class was relegated to the margins of public discourse, championed by union leaders, Dennis Kucinich on the left, Pat Buchanan on the right, and Ross Perot, while largely dismissed by the mainstream media.

While Apple has been pilloried from National Public Radio to the New York Times for its effective support of a slave economy, most consumer electronics are now imported. The irony of the Apple story is that Chinese labor content may well not be the cost driver we presume it to be. As in many other industries, the cost of what is in the box can be a relatively small share of total costs once product development, marketing, packaging and profits are taken into account.

This, of course, is why China is not particularly happy with its role in the Apple supply chain. When the profits of Apple products are divided up, far more flows to Cupertino than to Chengdu. And that is the reality of modern manufacturing. Based on National Science Foundation data on the value chain of the iPad, for example, final assembly in China captures only $8 of the $424 wholesale price. The U.S. captures $150 for product design and marketing, as well as $12 for manufactured components, while other nations, including Japan, Korea and Taiwan, capture $76 for other manufactured components.
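Summing the NSF figures quoted above shows just how little of the wholesale price final assembly captures. A short sketch (the dollar figures are from the article; the residual line is simply what is left over after the itemized components):

wholesale_price = 424.0
value_captured = {
    "China (final assembly)": 8.0,
    "U.S. (design and marketing)": 150.0,
    "U.S. (manufactured components)": 12.0,
    "Japan/Korea/Taiwan (components)": 76.0,
}
# Whatever the article does not itemize: materials, other costs, margin.
value_captured["Other costs and margin (residual)"] = (
    wholesale_price - sum(value_captured.values())
)
for who, dollars in value_captured.items():
    print(f"{who:40s} ${dollars:6.2f}  ({dollars / wholesale_price:5.1%})")

Run the numbers and final assembly in China works out to under 2% of the wholesale price, while U.S. design and marketing alone takes about 35%.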

If anything, the NSF data -- and China's chagrin -- reflect a world in which the economic returns to design and innovation far exceed the benefits that accrue to the line workers who manufacture the product. This is one part of the phenomenon of growing inequality, and it would seem to blunt the frequent complaint that America no longer "makes things." We may not make things, but we think them up, and as the NSF data suggest, to the designers go the spoils.

Yet there is no fundamental reason that the decline in manufacturing jobs in America should be deemed inevitable and permanent. For all the talk about the number of engineers in China, the fundamental issue remains price. As a friend, a consulting engineer who works with Apple in China, has commented: "Yeah, they have engineers, but the driver is cost, cost, cost. And the labor quality is awful. We lose a lot of product and have to stay on top of everything, but at $27 per day, you can afford a lot of management."

This argument conflicts with Stiglitz's deterministic thesis. Just as manufacturing jobs left the United States, they can come back as economic conditions change. As wage rates rise in other countries, one competitive advantage of outsourcing shrinks. If nations -- from China to Taiwan -- migrate away from their practice of pegging their currencies to the dollar, foreign-currency risk will offset some of the cost advantages of outsourcing. And as newly industrialized nations like Brazil see their own manufacturing sectors ravaged by mercantilist competitors, there is a growing understanding of the need for order and fair rules to govern the forces of globalization.

The Apple-Foxconn affair spooked consumers of Apple products -- at least for a news cycle or two. Like Claude Rains in Casablanca, we were shocked, shocked to confront the reality of labor conditions in China. But the story was less about China than about us. That Foxconn could put eight thousand workers to work within thirty minutes to accommodate a last-minute design change by Steve Jobs was not -- as Jobs suggested in a meeting with President Obama -- an argument for why those jobs could never come back to America; rather, it was illustrative of the astonishing narcissism of the Apple world.

It is true that no American factory could deliver for Apple as Foxconn did. But then, there really was no need to. The story was less about what Foxconn could deliver than about what Foxconn's customer had the audacity to demand.

This story raised the question of whether we care where our products are made. The answer is unclear; however, many Americans have long cared about buying cars made in this country, and Clint Eastwood's Super Bowl ad has raised awareness of the question. What is clear is that if Americans care where their products are made, companies will care. Therefore, even as the president promoted tax credits for insourcing -- the new word for bringing those jobs back -- perhaps another step would be to build on the power of choice. Not all Americans care where their products are made, but many certainly do; and even when one does care, it tends to be difficult to find out.

Perhaps a simple step would be for companies to provide that information to consumers. Even if the labeling were voluntary, knowing which companies chose to inform their customers would tell many of us all we need to know. Then we could find out whether the Apple story really changed anything, and whether consumers might be willing to weigh more than the last dollar saved if doing so sustains a diversified economy into the future.

For five centuries, our continent has been able to invent the ideas and the goods that have transformed the world, yet it seems to have lost the secret of their manufacture. It no longer knows if it is capable of inventing the world of tomorrow; it doesn’t even know if it has a common future any more.

Of the two terms of Schumpeter’s formula summarizing capitalism, “creative destruction”, we have forgotten the former, that is to say, creation, leaving us only with the latter, destruction. For many, unemployment has become the norm. The hope of becoming a part of society through work has evaporated. Extreme ideologies bloom, though a single look at the world would be enough to demonstrate the absurdity of them all. Our societies thought they had built a balance in which every successive generation could legitimately hope that its progeny would have a better life. Today they are convinced that we can no longer keep this promise. Our systems of social negotiation have broken down, and our systems of social protection are threatened. Belief in progress has faded. Many perceive technical progress as a danger, economic progress as a lie, social progress as a mirage, democratic progress as an illusion.

We are living through a turning point, in great confusion. Nothing of what seemed obvious yesterday is evident today. Nor are there any signs to tell us what future certainties will be. The great points of reference — the Nation, the State, Morality — seem to have disappeared. The great hopes of tomorrow remain invisible.

We must struggle against this doubt, so devastating for a Europe whose history was built, precisely, upon progress.

When a majority of the population comes to the point of thinking that tomorrow may well be worse than today, the only possible strategy it can see becomes that of preserving what exists. Everyone wants things to remain frozen as they are as long as possible, in order to preserve his own interests, which leads to hampering, preventing, all change. Fear is the greatest ally of conservatives. It feeds the rise of egoism: the social egoism of those who can or believe they can succeed in spite of others or against others; ethnic egoism that rejects the other, whom they consider responsible for all ills; and the national egoism of each individual country persuaded it should prevail over its partners.

So how can one approach tomorrow in a new way?

We must invent a new world.

We must recover the meaning of progress, not progress as an automatic reflex or an empty word, but as an act of will. We must return to the idea that it is possible to act in order to influence things. Never become resigned, never submit, never retreat. We must see the market as no more than a means of coordinating individual actions: no society can organize itself by virtue of the market alone. Thus we must be wary of the liberal illusion of a society that has no need to think out its future or define its regulations. On the contrary, it is up to politics to reinvent itself, to define new rules and new institutions.

Many believe that in a so-called global and liberal economy, governments should have no power. They are mistaken. The crisis and the reactions to the crisis demonstrate that this is a fallacy, that there exist good policies and bad ones, good regulations and bad ones.

We must act in three areas:

Production, in other words, growth. We must tell ourselves that without growth, there can be no progress and no reduction of inequality.

Solidarity, which is a method as much as it is a necessity. There is no progress if it does not profit all and if it is not accepted by all. Solidarity in Europe is not only a part of our glorious past, it is the key to our tomorrow.

Public action, for the genius of Europe is first of all that of a collective project and a common destiny.

Production and growth, to begin with, to reach full employment.

That may seem like a utopia, but actually it is not.

The society of full employment we should strive for will not be that of the 60s. It will not be a society without unemployment. But it will be, or it should be, a society in which unemployment is only short-term. A mobile society in which every wage-earner can tell himself he will advance. The contrary of a society where everyone is pigeon-holed to remain in the same profession or at the same rank or level for decades. A society where all of us are perpetually learning or relearning. This implies a radical change in our relation to work and to our crafts and professions.

We must renew our solidarity.

It is the distinctive feature of Europe and of European society. Those who carry the banner of individualism refuse to understand that, in the social contract, we Europeans have a concept much richer than theirs, founded upon the existence of a common good that cannot be reduced to the sum of individual interests. We should be proud of what we have built: adequate medical care available to all, an end to poverty among the aged, solidarity with those who do not have jobs. An economy more exposed to technical change and to the appearance of new competitors is also a harsher one; it therefore demands that those whom progress leaves behind can count on the solidarity of those who benefit from it.

Finally, we must reinvent public service, public action, that is to say, the role of the State.

What counts is not the amount of taxes paid, it is the comparison between taxes and the quality of public goods and services offered in exchange: education, training, security, roads, railroads, communications infrastructure. It is the State’s capacity to favor the creation of wealth, to ensure its just and efficient redistribution, to reduce inequality.

The key principle upon which this project must depend is that of equality. The rise of unprecedented inequality is characteristic of the present day. It is something new, and it has been building over a sustained period. To borrow Necker’s phrase, equality was the very idea of the Revolution. Yet today, the force that is affecting and transforming the world is the development of inequality, and it hasn’t slowed for decades. Inequality between countries, between regions of the world, between social classes, between generations. The result is the dissolution of the feeling of belonging to a common world: a world henceforth undermined by social inequality, by the secession of the wealthy, and by the revolt of those who feel, conversely, forgotten, despised, rejected or abandoned, and whose sole weapons are their discontent and the power of their indignation.

We must revive what was once the revolutionary plan: equality, in other words, a manner of building society, of producing together, of living together and of breathing life anew into the common good.

As Pierre Rosanvallon put it, it is a question of refounding a society of equals.

A society in which everyone possesses the same rights, in which each of us is recognized and respected as being as important as the others. A society that allows each one to change his life.

We must also take into consideration the political crisis we are currently experiencing. It is marked not only by political disengagement, abstention, and the rise of extreme ideologies, but also by an institutional crisis, or more precisely, a crisis of the political model: the extreme concentration of power, and in particular of executive power, in the hands of one man, the President of the Republic; the real power of a single individual set against the effective power of all. It is marked as well by a crisis of decision-making and by the weakened legitimacy of institutions, government, ministers and other authorities.

What is to be done? To undertake a program of institutional reform comparable in its breadth to that of 1958, at the establishment of the 5th Republic. With two main objectives.

To make political decisions more effective, and with this in mind: introduce a dose of proportional representation in elections in order to ensure the best representation possible; reduce by half the number of parliamentary representatives and prohibit the holding of multiple offices; and downsize the number of ministers to fifteen, each concentrating on the lofty missions of State, thus avoiding the dispersion of public action and ridding ourselves of that French specificity of incessantly inventing new ministries whose missions are vague but whose uselessness is certain.

Take up the challenge of democratic representation. The historic principle of representation, the idea according to which the people exercise real power through the intermediary of their elected representatives, can only function if we recognize that two principles have proven largely fictitious. The first is the view that a relative or absolute majority represents the opinion of all. The second is that the ballot represents the opinion of the citizen, whereas the rich diversity of an opinion cannot be reduced to the choice of one person at a given time. The result is a legitimate feeling of not being represented. The demand for better representation must be met with more participation, the submission of governments to intensified surveillance, to more frequent rendering of accounts, to new forms of inspection. It is not possible to keep an eye on every decision, but everyone must be entitled to participate in the collective power through a system of evaluation.

This is the price of the construction of a more just and meaningful society.

Knock knock... is there anyone who has been to INGA?

Inga Hydroelectric Power Complex in the Democratic Republic of the Congo

The region of the Congo River basin is endowed with an extraordinary richness of natural resources. One of those resources is hydroelectric potential. The Congo River drains a basin that sits several hundred meters above sea level; near the Atlantic it falls about three hundred meters over a stretch of river only tens of kilometers in length. Here the Congo, second only to the Amazon in volume of flow, pours down to the ocean. This is a hydroelectric site second to none in the world. There are two hydroelectric installations there already, but a proposed dam would generate enough electricity to meet the needs of all of Africa.

It is, however, the tragedy of the region that for all its natural resources their development falls miserably short of the potential. When independence was in the offing for the Belgian Congo around 1960, the Belgian government held out the possibility of building the Inga Dam on the Congo near the city of Matadi as an inducement for the Congolese to postpone independence. The public announcements said that the Inga Dam could create an industrial complex on the Congo comparable to the Ruhr Valley in Germany. The attempted enticement did not work, and the Congolese opted for immediate independence.

Nevertheless, in 1972 the Inga Dam was built, providing electrical power for the mining area of Katanga (Shaba) with a capacity of 351 megawatts (MW). Ten years later, in 1982, a much larger installation was built with a capacity of 1,424 MW. However, due to mismanagement and bad economic policies on the part of the president, Mobutu Sese Seko, the two Inga dams operated at only a small fraction of their capacities. Mobutu's policy of Zaireization resulted in the replacement of foreign technicians with domestic personnel without adequate training and skills. As a consequence, there were periods in which the hydroelectric installations were not functioning at all.

The first two hydroelectric installations on the Congo are now known as Inga I and Inga II. There is now a proposed Inga III, which would have a generating capacity of 3,500 MW. This installation would cost about $5 billion, with construction targeted to begin in 2012 and the World Bank as a major lender. Beyond Inga III there are proposals for a Grand Inga project with a generating capacity of 39,000 MW, more than twice that of the Three Gorges Dam in China. Grand Inga would cost something in the neighborhood of $50 billion, an enormous amount of money, but it could supply electricity to much of Africa. Even with the present installations there was a project to build a power line to transmit power to Egypt; the agreement was negotiated but never implemented.
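Taken at face value, the price tags quoted above imply that the two proposed projects would cost roughly the same per unit of capacity. A quick check in Python using only the figures in the text (nominal dollars, ignoring financing costs, transmission lines and capacity factors):

# Rough cost-per-capacity comparison from the figures quoted above.
projects = {
    "Inga III":   (5e9, 3_500),      # (estimated cost in dollars, capacity in MW)
    "Grand Inga": (50e9, 39_000),
}
for name, (cost, mw) in projects.items():
    print(f"{name}: ${cost / mw / 1e6:.2f} million per MW of capacity")
# Inga III: $1.43 million per MW; Grand Inga: $1.28 million per MW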

Tuesday, 27 June 2017

I think that genius, whether we're talking about music or any other art form, is by definition a highly unusual congregation of abilities and temperaments. It's a set of experiences that is unique, and an emotional intensity perhaps, and a discipline. I don't think there's any evidence that Bach was manic or depressed or anywhere near manic or anywhere near depressed, and yet he obviously conjured these great emotions. So it certainly is not the case that you need to have this kind of emotional experience. It's more that a disproportionate number of people do.

Dr. Kay Redfield Jamison

"Neither a lofty degree of intelligence nor imagination nor both together
go to the making of genius. Love, love, love, that is the soul of genius."

The word genius derives from Greek and Latin words meaning "to beget," "to be born," or "to come into being" (it is closely related to the word genesis). It is also linked to the word genial, which means, among other things, "festive," "conducive to growth," "enlivening," and "jovial." Combining these two sets of definitions comes closest to the meaning of the word genius: "giving birth to one's joy."

What Kind of Genius Are You?... asks Daniel Pink

quick and dramatic, or careful and quiet

"I have often asked myself whether, given the choice, I would chose to have manic-depressive illness. ... Strangely enough, I think I would chose to have it. It's complicated. Depression is awful beyond words or sounds or images ... So why would I want anything to do with this illness? Because I honestly believe that as a result of it I have felt more things, more deeply; had more experiences, more intensely; ... worn death 'as close as dungarees', appreciated it - and life - more; seen the finest and most terrible in people ... But, normal or manic, I have run faster, thought faster, and loved faster than most I know. And I think much of this is related to my illness - the intensity it gives to things."

Kay Redfield Jamison

"Touched With Genius"

http://www.menstuff.org/columns/overboard/jamison.html

“Why do people think artists are special? It’s just another job.”

Andy Warhol

Psychological studies suggest that artists are emotional (Barron, 1972), sensitive, independent, impulsive, and socially aloof (Csikszentmihalyi & Getzels, 1973; Walker, Koestner, & Hum, 1995), introverted (Storr, 1988), and nonconforming (Barton & Cattell, 1972). But how pervasive are these traits among successful artists - the personalities who actually shape the domain of art? Is there really such a thing as a timeless, constitutional artistic personality?

Psychology 175: Genius, Creativity, and Leadership

A General Education Course in the Social Sciences with Writing Experience

Dean Keith Simonton, Ph.D., Distinguished Professor of Psychology

This course examines genius, creativity, and leadership through a great diversity of methodological techniques and theoretical perspectives.

Which of these approaches seem to be the most enlightening, which the least, and why? To what extent are some methodologies tied to certain theories whereas other methods seem relatively theory free? Which methods and theories are most suitable for studying just creativity? Which work best for investigating leadership? How possible is it for a psychology of genius to emerge that imposes one method and theory on all pertinent phenomena? Are there aspects of genius that are overlooked by all current methodological and theoretical frameworks?

""Genes dictate. Genes instruct. Genes determine. For more than a century, this has been the widely accepted explanation of how each of us becomes us. ... Such was the unequivocal interpretation of early-twentieth- century geneticists..."

"The popular conception of the gene as a simple causal agent is not valid," declare geneticists Eva Jablonka and Marion Lamb. "The gene cannot be seen as an autonomous unit — as a particular stretch of DNA which always produces the same effect. Whether or not a length of DNA produces anything, what it produces, and where and when it produces

.

What Asperger's syndrome has done for us

By Megan Lane BBC News Online Magazine

Michelangelo might have had it. So, too, may Einstein, Socrates and Jane Austen. All are claimed to have had Asperger's syndrome, a form of autism. What is it about this developmental disorder that can lead to genius?

We will never know for sure if the genius of past greats may have been a symptom of a form of autism.

Informed speculation that Michelangelo might have had Asperger's syndrome is just that - the Renaissance artist was never diagnosed in his lifetime. Indeed, Asperger's was only identified as a separate condition in 1944, and it was not until the mid-90s that it became a clinical diagnosis.

Instead, two medical experts have drawn this conclusion from studying contemporary accounts of the artist's behaviour - his single-minded work routine, few friends and obsessional nature - and comparing it with traits displayed by adults who have been diagnosed today.

It's a theory which has been rubbished by art historians, but which has piqued the interest of Eileen Hopkins, of the National Autistic Society. The artist's meticulously observed figures and high work rate resonate with such a diagnosis.

WHAT IS AUTISM?

A complex, lifelong developmental disability

It involves a biological or organic defect in brain function

Autism (including Asperger's) is said to affect about 500,000 people in the UK today

"This reflects the positive side of this gene, that people with it can contribute in many ways. Being single-minded, it gives them the chance to focus on something which interests them. Their talents are not diluted by the everyday interactions that take up so much time for the rest of us."

The same posthumous diagnosis has been made of other historical figures, among them the scientists Charles Darwin, Isaac Newton, Albert Einstein and Marie Curie, the politician Eamon de Valera, the poet WB Yeats and Pop Art giant Andy Warhol.

Attention to detail

What is the link between this condition and creativity, be it in the arts or sciences?

Professor Michael Fitzgerald, of Dublin's Trinity College, one of the experts who proposed the Michelangelo theory, says it makes people more creative.

"People with it are generally hyper-focused, very persistent workaholics who tend to see things from detail to global rather than looking at the bigger picture first and then working backwards, as most people do."


But Professor Simon Baron-Cohen, of Cambridge University, says it is more accurate to describe this creativity as "systemising" - a strong drive to analyse detail.

"This might be in mathematics, machines, natural phenomena or anatomy, to identify rules that govern a system and any variations in that system."

While those whose strength lies in rational analysis are by no means exclusively male, it is described as a male brain trait compared with the so-called female ability to empathise - a characteristic weak spot for those with Asperger's.

"The condition does tend to affect men more than women, especially among those who are high-functioning. Males outnumber females nine to one in this diagnosis," he says.

DEFINING TRAITS INCLUDE:

Find social situations confusing

Hard to make small talk

Good at picking up details and facts

Hard to work out what others think and feel

Can focus for very long periods

Source: Cambridge Lifespan Asperger Syndrome Service

Thus it is thought possible that some maths and physics experts, far from being bright but anti-social misfits, may actually have had Asperger's. One whom Mr Baron-Cohen has helped diagnose is the British mathematician Richard Borcherds, the 1998 winner of the Fields Medal - the Nobel Prize of the maths world.

The naturalist and TV presenter David Bellamy mentions in his autobiography that although undiagnosed, he believes he has a form of autism. And Microsoft boss Bill Gates' personality quirks have been compared to those of an autistic.

"This goes to show that people who get by without a diagnosis have found a niche where they can use their skills to make a contribution. This need not be dramatic - perhaps they are a very methodical worker, who understands the rules of their chosen profession," says Mr Baron-Cohen.

Characters with autistic traits

On a lighter note, fictional characters said to display characteristics of those with Asperger's include Mr Spock, Lisa Simpson, Mr Bean and Cliff from Cheers. And one of the school boys in Grange Hill, Martin Miller, has the condition and so has found himself in difficulty after taking a mate's advice on girls literally.

"Mr Spock is an extreme example of someone driven by logic and systemising, but who has no interest in the feelings of others," says Mr Baron-Cohen. "But he is very much a caricature."


NeuroFocus Uses Neuromarketing To Hack Your Brain

... to plumb your mind. Here's how it's done.


A.K. Pradeep knows what you like and why you like it. Take the sleek, slick iPad. Ask Mac lovers why they adore their tablet and they'll say it's the convenience, the touch screen, the design, the versatility. But Apple aficionados don't just like their iPads; they're preprogrammed to like them. It's in their subconscious--the curves, the way it feels in their hands, and in the hormones their brains secrete when they touch the screen. "When you move an icon on the iPad and it does what you thought it would do, you're surprised and delighted it actually happened," he says. "That surprise and delight turns into a dopamine squirt, and you don't even know why you liked it."

Pradeep is the founder and CEO of science-based consumer-research firm NeuroFocus, a Berkeley, California-based company wholly owned by Nielsen Holdings N.V. that claims to have the tools to tap into your brain (or, as Woody Allen called it, "my second favorite organ"). You might say Pradeep was born to plumb the depths of our minds. The "A.K." in his name stands for Anantha Krishnan, which translates as "unending consciousness"; Pradeep means "illumination." Fortunately, he doesn't refer to himself as Unending Illuminated Consciousness, preferring, as is custom in his native region of India, a single name: Pradeep. "Like Prince or Madonna," he explains.

On this particular spring day, he's in New York to offer a presentation at the 75th Advertising Research Foundation conference. As he holds court on a small stage in a ballroom of the Marriott Marquis in Midtown, Pradeep seems to relish the spotlight. Swizzle-stick thin and topped with unruly jet-black hair, the effusive 48-year-old is sharply dressed, from his spectacles to his black jacket and red-and-black silk shirt, and all the way down to his shiny boots. He stands out, needless to say, from the collective geekdom gathered at this egghead advertising fest.

Speaking with the speed and percussive enunciation of an auctioneer, Pradeep is at the conference today to introduce his company's latest innovation: a product called Mynd, the world's first portable, wireless electroencephalogram (EEG) scanner. The skullcap-size device sports dozens of sensors that rest on a subject's head like a crown of thorns. It covers the entire area of the brain, he explains, so it can comprehensively capture synaptic waves; but unlike previous models, it doesn't require messy gel. What's more, users can capture, amplify, and instantaneously dispatch a subject's brain waves in real time, via Bluetooth, to another device--a remote laptop, say, an iPhone, or that much-beloved iPad. Over the coming months, NeuroFocus plans to give away Mynds to home panelists across the country. Consumers will be paid to wear them while they watch TV, head to movie theaters, or shop at the mall. The firm will collect the resulting streams of data and use them to analyze the participants' deep subconscious responses to the commercials, products, brands, and messages of its clients. NeuroFocus data crunchers can then identify the products and brands that are the most appealing (and the ones whose packaging and labels are dreary turnoffs), the characters in a Hollywood film that engender the strongest emotional attachments, and the exact second viewers tune out an ad.


Pradeep and his team in Berkeley are hardly the first to make a direct connection between brain function and how it determines consumer behavior. Advertisers, marketers, and product developers have deployed social psychology for decades to influence whether you buy Coke or Pepsi, or a small or an extra-large popcorn. Like the feather weight of that mobile phone? Suddenly gravitating to a new kind of beer at the store? Inexplicably craving a bag of Cheetos? From eye-deceiving design to product placement gimmickry, advertisers and marketers have long exploited our basic human patterns, the ones that are as rudimentary and predictable as Pavlov's slobbering dog.

NeuroFocus, however, promises something deeper, with unprecedented access into the nooks and crannies of the subconscious. It's a tantalizing claim, given that businesses spend trillions of dollars each year on advertising, marketing, and product R&D, and see, by some estimates, 80% of all their new products fail. The hope that neuroscience can provide more accurate results than focus groups and other traditional market research is why Citi, Google, HP, and Microsoft, as well as soda companies, brewers, retailers, manufacturers, and media companies have all become NeuroFocus clients in the past six years. When salty-snack purveyor Frito-Lay looked to increase sales of its single-serve 100-calorie snacks to women, it tapped NeuroFocus, whose research informed new packaging and a new ad campaign. CBS partnered with the firm to measure responses to new shows and TV pilots; Arts & Entertainment (A&E) had NeuroFocus track viewers' second-by-second neurological reactions to commercials to ensure that its programs work with the ads that fund them; and Pradeep's team helped ESPN display the logos of its corporate advertisers more effectively on-air. California Olive Ranch had NeuroFocus test its olive-oil labels for maximum appeal. And, as we'll see later, Intel hired the company to better understand its global branding proposition, while PayPal sought a more refined corporate identity.

These corporations vary widely, but they share a fundamental goal: to mine your brain so they can blow your mind with products you deeply desire. With NeuroFocus's help, they think they can know you better than you know yourself.

Orange cheese dust. That wholly unnatural neon stuff that gloms onto your fingers when you're mindlessly snacking on chips or doodles. The stuff you don't think about until you realize you've smeared it on your shirt or couch cushions--and then keep on eating anyway, despite your better intentions. Orange cheese dust is probably not the first thing you think of when talking about how the brain functions, but it's exactly the kind of thing that makes NeuroFocus, and neuromarketing in general, such a potentially huge and growing business. In 2008, Frito-Lay hired NeuroFocus to look into Cheetos, the junk-food staple. After scanning the brains of a carefully chosen group of consumers, the NeuroFocus team discovered that the icky coating triggers an unusually powerful response in the brain: a sense of giddy subversion that consumers enjoy over the messiness of the product. In other words, the sticky stuff is what makes those snacks such a sticky brand. Frito-Lay leveraged that information into its advertising campaign for Cheetos, which has made the most of the mess. For its efforts, NeuroFocus earned a Grand Ogilvy award for advertising research, given out by the Advertising Research Foundation, for "demonstrating the most successful use of research in the creation of superior advertising that achieves a critical business objective."

This seemingly precise way of unveiling the brain's inner secrets is the ultimate promise of neuromarketing, a science (or perhaps an art) that picks up electrical signals from the brain and spins them through software to analyze the responses and translate those signals into layman's terms. While evolving in tandem with advances in neuroscience, the field owes much to a study conducted at the Baylor College of Medicine in 2004 to investigate the power of brand perception on consumer taste preferences. Based on the famous Coke vs. Pepsi tests of yesteryear, volunteers had their brains scanned in an MRI as they sampled each beverage. When they didn't know what they were drinking, half liked Coke and half liked Pepsi. When they did know, however, most preferred Coke, and their brain scans showed a great deal of activity in the cranial areas associated with memory and emotion. In other words, the power of Coke's brand is so great that it preps your brain to enjoy its flavor--and presumably to influence your purchasing decisions when you're in the supermarket.

Since the Baylor study, neurotesters have turned to the EEG as their standard measurement tool, rather than the MRI. For starters, the MRI is bulkier, harder to administer, and more expensive. Far more important, however, is the fact that an EEG measures the brain's electrical activity on the scalp, while an MRI records changes in blood flow inside the brain. This means that an EEG reading can be done almost in real time, while an MRI's has a five-second delay. MRIs provide beautiful, high-resolution pictures, ideal for identifying tumors and other abnormalities, but they are useless for tracking quick-hit reactions.

For example, imagine that you are asked to generate an action verb in response to the word ball. Within 200 milliseconds, your brain has absorbed the request. Impulses move to the motor cortex and drive your articulators to respond, and you might say "throw." This process happens far too fast for an MRI to record. But an EEG can capture virtually every neurological impulse that results from that single word: ball.

This is where modern neuromarketing exists--at the very creation of an unconscious idea, in the wisp of time between the instant your brain receives a stimulus and subconsciously reacts. There, data are unfiltered, uncorrupted by your conscious mind, which hasn't yet had the chance to formulate and deliver a response in words or gestures. During this vital half-second, your subconscious mind is free from cultural bias, differences in language and education, and memories. Whatever happens there is neurologically pure, unlike when your conscious mind takes over and actually changes the data by putting them through myriad mental mechanisms. It's all the action inside you before your conscious mind does the societally responsible thing and reminds you that artificially flavored and colored cheese dust laced with monosodium glutamate is, well, gross.

With the instantaneous readings of EEG sensors, neuromarketers can track electrical waves as they relate to emotion, memory, and attention from specific areas of the brain: namely, the amygdala, an almond-shaped region that plays a role in storing emotionally charged memories and helps trigger physical reactions (sweaty palms, a faster heartbeat); the hippocampus, where memory lurks; and the lateral prefrontal cortex, which governs high-level cognitive powers (one being attention). Once the brain waves are collected, complex algorithms can sift through the data to connect each reaction to a specific moment.
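The article does not disclose NeuroFocus's proprietary algorithms, but the standard building block for connecting EEG activity to a specific moment is event-related potential (ERP) averaging: slice the continuous recording into epochs time-locked to each stimulus and average them, so that random background activity cancels out while the stimulus-locked response survives. A generic sketch in Python with NumPy, illustrative only and not NeuroFocus's method:

import numpy as np

def erp_average(eeg, onsets, pre=50, post=200):
    # Average EEG epochs time-locked to stimulus onsets (one channel).
    epochs = [eeg[s - pre : s + post]
              for s in onsets
              if s - pre >= 0 and s + post <= len(eeg)]
    return np.mean(epochs, axis=0)   # noise cancels; evoked response remains

# Toy demo: a weak, fixed response hidden in noise at 100 known onsets.
rng = np.random.default_rng(0)
eeg = rng.normal(0.0, 1.0, 600_000)           # noisy baseline recording
onsets = rng.integers(1_000, 599_000, 100)    # sample indices of each stimulus
for s in onsets:
    eeg[s : s + 30] += 0.5                    # small evoked response
erp = erp_average(eeg, onsets)
print(erp[50:80].mean(), erp[:50].mean())     # response window vs. pre-stimulus baseline

Averaged over a hundred epochs, the 0.5-unit response stands out clearly even though it is invisible in any single pass through the raw signal.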

Neuromarketers like Pradeep argue that this testing is much more efficient, cost effective, and precise than traditional methods like focus groups. While Gallup must poll roughly a thousand people to achieve a 4% margin of error, NeuroFocus tests just two dozen subjects for its corporate clients--and even that is a sample size larger than those deployed by leading academic neuroscience labs. This is possible because people's brains are remarkably alike, even though there are some differences between male and female brains, and between those of children and senior citizens. And NeuroFocus collects a massive amount of input, recording and analyzing billions of data points during a typical neurological testing project. This is the genius of neuromarketing, according to a booster like Pradeep. He promises an accurate read of the subconscious mind. Focus groups and surveys, on the other hand, give an imprecise measure of the conscious mind, of so-called articulated, or self-reported, responses. They are one step removed from actual emotion, inherently weak: like flashbacks in a film. They are fine for eliciting facts, less so for probing into what people really feel.
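The sampling arithmetic behind the Gallup comparison is the classical worst-case margin of error for a yes/no question, roughly 0.98 divided by the square root of the sample size (real-world design effects push it a bit higher, toward the 4% quoted). A minimal sketch of that textbook formula, not anything specific to Gallup or NeuroFocus:

import math

def margin_of_error(n, p=0.5, z=1.96):
    # Worst-case 95% margin of error for a simple random sample of size n.
    return z * math.sqrt(p * (1 - p) / n)

for n in (1_000, 24):
    print(f"n = {n:5d}: +/- {margin_of_error(n):.1%}")
# n =  1000: +/- 3.1%
# n =    24: +/- 20.0%

By that classical yardstick a 24-person panel would be hopeless; NeuroFocus's argument is precisely that brains respond far more uniformly than opinions do, so survey-sampling math is the wrong frame.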


Not everyone agrees that neuromarketing is the next great thing, of course. Because its research has been primarily corporate funded and its tangible results primarily anecdotal, neuromarketing is not without detractors, who tend to lump it in with the array of businesses, like biometrics or facial mapping, that promise all sorts of new-wave marketing breakthroughs. Ray Poynter, founder of the Future Place, a social-media consultancy in Nottingham, England, colors himself a skeptic on all of them but saves his harshest criticism for neuromarketers. He believes they offer far more hype than science. "Neuromarketers are overclaiming massively," he says. "While it is likely to reduce the number of bad mistakes, and slightly increase the chance of good things happening, it's all a matter of degree."

Even so, it's hard to imagine neuromarketing proving less reliable than traditional market research. For decades, marketers have relied on focus groups and surveys to divine what consumers want, using these methods to solicit feedback on their attitudes, beliefs, opinions, and perceptions about an advertisement, a product and its packaging, or a service. Each year, hundreds of thousands of focus groups are organized around the world, and about $4.5 billion is spent globally on qualitative market research.

This kind of "mother-in-law research," as ad exec Kirk Cheyfitz calls it, has all manner of shortcomings. It's not statistically significant, so it's risky to graft your findings onto the population at large. One or two blowhards may hijack an entire panel, and researchers can, without knowing it, influence participants. The world has changed, and yet so much market research is still conducted the same old way.


"I bet you, long ago if you looked at cave paintings, there were a bunch of Cro-Magnon men and women sitting around a fire in focus groups wondering whether to go hunt mastodon that night," Pradeep says. "Today, our focus groups are no different." In the tale of our inner lives, we have always been unreliable narrators. Pradeep believes he can get at the truth.

When David Ginsberg joined Intel in 2009 as the company's director of insights and market research, he was something of an expert on the slippery nature of "truth," having spent 15 years working on political campaigns for John Edwards, John Kerry, Al Gore, and Bill Clinton. Ginsberg was downright skeptical of neuromarketing, or, as he calls it, "nonconscious-based research." He thought it had more to do with science fiction than reality. But he also knew that Intel had been conducting market research as if it were still 1965, with surveys that were the equivalent of sending Gallup off to knock on thousands of doors. That may have worked decently in the days when a person bought a computer based on specs--processing speed, RAM, etc. But in an age where virtually every computer is sold with power to spare, Ginsberg knew that the rationale for buying a certain computer was as much emotional as it was rational. To compete in this new market, Intel the company had to understand how people felt about Intel the brand.

"If you ask people if they know Intel, something like 90% will say they know Intel," Ginsberg says. "Ask if they like Intel, a huge percentage will say they like Intel. Ask them [to rank or name] tech leaders, however, and we come out much lower on the list." Ginsberg felt that he needed to understand consumers' feelings at a deeper level: What words did consumers associate with Intel? Were these associations altered by one's culture? Ginsberg decided to run pilot tests with a number of market-research firms, and despite his sense of neuromarketing as mumbo jumbo, he included NeuroFocus. What he learned surprised him and turned him into a believer.

NeuroFocus structured its test for Intel as it does most of its market research, patterning it after something called the Evoked Response Potential test, a staple of neuroscience. Test subjects were paid to come to a NeuroFocus lab and put on a cap with 64 sensors that would measure electrical activity across the brain. Because the U.S. and China are two very important markets for Intel, NeuroFocus tested groups of 24 consumers (half men, half women) in Berkeley and in a midsize city in China's Sichuan Province.

In a quiet room, each test subject was shown the words "achieve," "possibilities," "explore," "opportunity," "potentiality," "identify," "discover," "resolves," and "solves problems." Each flitted by on a TV screen at half-second intervals. The subject was instructed to press a button whenever she saw a word with a letter underscored by a red dot. After several minutes of this subconscious-priming word test, she was shown a few Intel ads. Following this, the words were again presented on the screen, this time without the dots.

The exercise served two functions: First, the red dots focused the subject's attention; second, they gave NeuroFocus a baseline measure of the brain's response, since each time a test subject saw the red dot, her brain went "A-ha! There's a word with a red dot." Click.


When NeuroFocus later analyzed the EEG readings, it looked for those same "a-ha" moments from the period during which the subjects had viewed the Intel ads. The words that provoked the most such responses were "achieve" and "opportunity." Interestingly, women in the U.S. and in China had virtually the same responses post-advertisement, as did American men and Chinese men. The differences were between the genders; on both sides of the Pacific, men and women had strikingly different reactions. "Achieve" prompted the most intense reaction among women, while men gravitated toward "opportunity."

Says Ginsberg: "This was incredibly fascinating to us. There seem to be fundamental values across humanity." He believes Intel would never have learned this through traditional market research and focus groups, where cultural biases come into play. He also concluded that there are differences in how men and women think, and that these differences cross cultural boundaries. This is not news to Pradeep, who points out that male and female brains are different, and not in a Larry Summers women-aren't-as-good-at-math-and-science-as-men-are kind of way. The female brain is our default brain when we are in the womb. At about week eight, roughly half of all fetuses are bathed in testosterone. These now-male brains close down certain communication centers while opening up others geared toward sex and aggression. In female brains, meanwhile, the communication pathways continue to evolve, intricate neural routes are constructed across both hemispheres, and areas dedicated to emotion blossom. Life seems to imitate a beer commercial, doesn't it?

Now Intel is changing its marketing strategy. "A brand that helps people achieve and offers opportunity has a phenomenal brand attribute," Ginsberg says. "It gives you a new perspective on things, to understand your consumer better." The NeuroFocus findings have informed the next round of creative advertising you'll see from Intel, due to emerge later this year. "I guarantee when you see these ads you'll see a straight line," Ginsberg adds. "The study gave us fresh insights to talk about things we didn't have permission to talk about before."

It is conceivable that Intel could have redirected its advertising toward achievement and opportunity with the help of focus groups. But Ginsberg feels, and Pradeep fervently believes, that neuromarketing has a much better shot at getting closer to the unconscious truth, and therefore proving more effective. Still, the difference between the two forms of research sometimes seems to be just a matter of degree.

Barry Herstein left American Express to join PayPal in October 2007 as global chief marketing officer with the goal of giving eBay's transaction-processing division a coherent marketing strategy. After the first few weeks, he knew just how difficult the task would be. Almost every time he asked a PayPal employee, "What's the big idea behind PayPal?" the following response came back: "Safe, simple, wow!"

"Safe, simple, wow?" Herstein scoffs. "That's not a big idea. It's a tagline." It didn't even make sense. Wasn't any payment product supposed to be safe and simple? He supposed that software engineers might know that paying for things was complicated, but having worked at American Express and Citi, he knew that the consumer didn't think that was the case. And "wow"? He cringed. Then, after a series of brainstorming sessions and conversations with a broad range of customers, he hired NeuroFocus to help him figure out the basic concepts around which he could build a new global identity for PayPal.

As part of its standard methodology, NeuroFocus captures the subconscious resonance consumers have for seven brand attributes: form, function, and benefits, as well as feelings (the emotional connection a brand elicits from consumers), values (what it represents), metaphors (aspirations, challenges, lessons, or life events that seem connected to the product), and extensions (the unexpected and perhaps illogical feelings it inspires). Based on his earlier brainstorming sessions, Herstein asked NeuroFocus to home in on three attributes and create three phrases for testing within each. For function he offered "convenient," "fast," and "secure"; for feelings, "confident," "hassle-free," and "in the know"; and for benefits, "new opportunity," "on my side," and "empowering." The 21-person panel had 11 men and 10 women and was also segmented into regular, light, and non-PayPal shoppers.

According to NeuroFocus, "fast" ranked the highest in the function category. (Notably, "fast" was not acknowledged in any way by "safe, simple, wow.") In fact, according to the brain heat map that NeuroFocus created from the aggregated data, speed is a huge advantage that sets off extremely positive feelings, especially from regular users. The more people use PayPal, it seems, the more they appreciate how quickly they can close transactions. For the feelings category, "in the know" resonated best, and in benefits, "on my side" won out.
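The shape of that test is easy to express in code. Below is a minimal sketch, again in Python; the categories and candidate phrases come from the article, while the resonance scores (standing in for per-panelist EEG readings) are invented for illustration.

    from statistics import mean

    # Three of the seven brand attributes, each with Herstein's three
    # candidate phrases; scores are hypothetical per-panelist resonance
    # readings on a 0-1 scale, not actual NeuroFocus data.
    panel_scores = {
        "function": {"convenient": [0.6, 0.7], "fast": [0.9, 0.8], "secure": [0.5, 0.6]},
        "feelings": {"confident": [0.4, 0.5], "hassle-free": [0.6, 0.6], "in the know": [0.8, 0.7]},
        "benefits": {"new opportunity": [0.5, 0.6], "on my side": [0.8, 0.9], "empowering": [0.6, 0.5]},
    }

    # The winner in each category is the phrase with the highest mean resonance.
    for category, phrases in panel_scores.items():
        winner = max(phrases, key=lambda p: mean(phrases[p]))
        print(f"{category}: {winner}")
    # function: fast / feelings: in the know / benefits: on my side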

Examining brand attributes is a standard of traditional market testing, of course. At the same time as his NeuroFocus study, Herstein ran a parallel, more conventional track: an online survey. The results were significantly different. While the word "fast" resonated with this group, the phrase "on my side" wound up at the bottom of the benefits category, which was topped instead by "confident"--a word that had finished dead last among men in the NeuroFocus study.

Herstein trusted the NeuroFocus results, though, and set out to create a coherent global image for the company based on them. That image would humanize PayPal by emphasizing the outcomes it delivers, not the act of paying; nowhere in the new marketing would you find any dreaded, dreary images of two people hovering around a computer. "People don't want to see that," Herstein says. "They want to see people enjoying either what they just bought or the time that it gives them by paying fast."

Not everyone at the company was sold on his new approach. The heads of some foreign markets--Herstein declined to name which--predicted that the new campaign would bomb. Herstein says that his boss, PayPal president Scott Thompson, told him he was crazy--but Herstein was willing to stake his reputation on the new approach.

What happened? According to Herstein, when he changed PayPal's visual and verbal identity across the company's email and web pages, click-through and response rates increased three to four times. "I'm telling you, in the world of direct marketing, the words '400% improvement' don't exist," he says. "If you can go from 1.2% response rate to 1.3%, you'll get a promotion, right? And if you can take something from a 4% response rate to 16%? Unheard of."

"The mystique is that there's a way to turn consumers into robots to buy products. That's simply not the case," says one neuromarketer.

Herstein has left PayPal to join Snapfish and now sits on NeuroFocus's board as an unpaid adviser. While eBay confirms the basics of his account, it won't confirm his description of the outcomes from the marketing campaign he created; a spokesman repeatedly asked Fast Company not to include this information in our story.

This bid for secrecy is entirely in keeping with the aura around neuromarketing, an industry that is both highly confident about what it can deliver and very nervous about its perception in the broader world. Several neuromarketing firms were approached for this story, but the only one that would do more than provide vague descriptions of its work was NeuroFocus, which is by all accounts the industry leader. Out of dozens of its corporate clients, very few would agree to discuss their work with the firm.

Neuromarketing outfits are afraid of being branded as trendy voodoo science, no more trustworthy than palm readers. Such a perception, they believe, will wither with good results. Perhaps more worrying is the other end of the speculative spectrum, which posits that corporations armed with our neurological data will be able to push a secret "buy button" in our brains. This is a fear promulgated by, among others, Paul B. Farrell, a columnist for Dow Jones and author of The Millionaire Code. He calls this buy button your brain's "true decision-making processor," a "weapon of mass delusion." You end up like a computer "without virus protection" and "exposed to every Wall Street banker, politician, and corporate CEO with gobs of cash and a desire to manipulate your brain."

"There's still this mystique that there's a way to control consumers and turn them into robots to purchase products," says Ron Wright, president and CEO of Sands Research, a rival neuromarketing firm based in El Paso, Texas. "That is simply not the case." Nevertheless, after spending time with Pradeep, you get the feeling we've only just begun to tap the potential of this new movement. Pradeep is not a neuroscientist. He's a former GE engineer and consultant who became fascinated by neuromarketing after a conversation with a neuroscientist who sat next to him on a cross-continental flight. After seven years at the helm of NeuroFocus, he sees every product relationship in terms of the brain, like a virtuoso musician who hears music in everyday sounds, from the clackety noise of a woman's heels on a wooden floor to the melange of notes from a car engine.

On a sun-drenched afternoon in Berkeley, we tour the shops at the local mall. We stop in front of a Victoria's Secret plate-glass window and Pradeep points out the ambiguous expression of a lingerie model on one of its posters. He explains that the brain is constantly looking out for our survival and, as part of that, is always ready to measure another person's intent. Is that stranger happy? Angry? Sad? When an expression is not easy to decipher, we do a database search through our collection of faces--curious, worried, nervous, threatening--to choose which is closest to the one we see, and match it. "If the expression is easy to decipher, I hardly glance," he says. "But if the expression is relatively hard to decipher, she makes me open the cupboard of memory." Contrast this with the nearby Bebe store, where Pradeep shakes his head at the headless mannequins in the window. "Now that's what I call a crime against humanity. Money down the drain."

At the Apple store, we pause at a desktop computer and he explains why it's always better to put images on the left side of the screen and text on the right: "That's how the brain likes to see it," he says. "If you flip it around, the right frontal looks at the words and has to flip it over the corpus callosum to the left frontal lobe. You make the brain do one extra step, and the brain hates you for that." Pradeep loves Apple, and he loves to talk about Apple, in part because Steve Jobs never has been and probably never will be a client. (Apple doesn't even use focus groups. Jonathan Ive, Apple's top designer, famously said they lead to bland products designed to offend no one.) But the real reason he loves talking about Apple is that he believes the company has elevated basic design to high art, a hugely successful strategy that Pradeep thinks is justified by our most basic neurological underpinnings.

Which brings us back to that iPad. Pradeep claims the brain loves curves but detests sharp edges, which set off an avoidance response in our subconscious. In the same way our ancestors stood clear of sticks or jagged stones fashioned into weapons, we avoid sharp angles, viewing them as potential threats. NeuroFocus has performed several studies for retailers and food manufacturers and found that test subjects prefer in-store displays with rounded edges over those with sharper edges. In one instance, when these new rounded displays were rolled out to replace traditional store shelving, sales rose 15%.

But curved edges are only one reason for the iPad's success. We also like how the tablet feels, how sleek and well balanced it is. Signals generated by our palms and fingers, along with lips and genitals, take up the most surface area within our brain's sensory zone. The way a product feels in our hands can be a major selling point. It's why we prefer glass bottles to cans, which NeuroFocus product-consumption studies bear out, although it's not just the material, it's also the slender curve of the bottle and the ridges in it. The touch screen, too, is a mental magnet and can induce those hormonal secretions Pradeep likes describing.

Why we like these curves no one knows for sure. Perhaps our brains correlate curves with nourishment--that is to say, mommy. (Calling Dr. Freud.) In men, it could be sexual. One study asked men to view before-and-after pictures of naked women who underwent cosmetic surgery to shrink their waists and add to their derrieres. The men's brains responded as if they had been rewarded with drugs and alcohol. But this response to curves may be even more primal than sex, or beer. Another study suggested that men seek women with curves because women's hips and thighs contain higher doses of omega-3 fatty acids, which nurture babies' brains and lead to healthier offspring.

This is the flip side to our fears of neuromarketing: the potential to look at our choices in a new way that blends science, psychology, and history. Lately, NeuroFocus has been moving into product development, providing research to companies that will influence how products look, feel, and function before they hit the market. That's what the firm is doing with its Mynd crown of sensors. But Pradeep has visions that go far beyond testing products, packaging, and commercials. He imagines neurotesting as ideal for courtroom trials: A defense attorney could pretest opening and closing arguments for emotional resonance with mock juries. And while NeuroFocus is not getting involved in politics, he says that competitors of his helped Republican politicians shape their messages for the 2010 midterm elections.

One stunning application of neurotesting is the work of Robert Knight, Pradeep's chief science officer, and a host of other neuroscience researchers who are trying to develop a way for quadriplegics to control their wheelchairs just by thinking. When you watch someone move a hand to grab a can of soda, mirror neurons in your brain react as if you were grasping it yourself. Knight is studying which brain signals can be translated into software commands to drive a wheelchair. To further this research, Knight, part of the team that invented the Mynd, plans to give it away to scientists and labs around the world. And the next iteration, he promises, will be a big step up, with eye-tracking capability, a built-in video camera, and three times as many sensors for greater brain coverage. "If our limbs will not respond to the beauty of your thinking or your feeling, that is a horror beyond horrors," Pradeep says. "Restoring a little bit of gesture, a little bit of movement, a little bit of control to that beautiful mind is an extraordinary thing to do."

He seems sincere, passionate even, though of course I cannot read his mind.

A version of this article appears in the September 2011 issue of Fast Company.

Correction: Neurofocus is owned by Nielsen Holdings N.V., not Nielsen Research as stated in the original article.

Monday, 19 June 2017

Another Bank Bailout

By PAUL KRUGMAN

Published: June 10, 2012

Oh, wow — another bank bailout, this time in Spain. Who could have predicted that?

The answer, of course, is everybody. In fact, the whole story is starting to feel like a comedy routine: yet again the economy slides, unemployment soars, banks get into trouble, governments rush to the rescue — but somehow it’s only the banks that get rescued, not the unemployed.

Just to be clear, Spanish banks did indeed need a bailout. Spain was clearly on the edge of a “doom loop” — a well-understood process in which concern about banks’ solvency forces the banks to sell assets, which drives down the prices of those assets, which makes people even more worried about solvency. Governments can stop such doom loops with an infusion of cash; in this case, however, the Spanish government’s own solvency is in question, so the cash had to come from a broader European fund.

So there’s nothing necessarily wrong with this latest bailout (although a lot depends on the details). What’s striking, however, is that even as European leaders were putting together this rescue, they were signaling strongly that they have no intention of changing the policies that have left almost a quarter of Spain’s workers — and more than half its young people — jobless.

Most notably, last week the European Central Bank declined to cut interest rates. This decision was widely expected, but that shouldn’t blind us to the fact that it was deeply bizarre. Unemployment in the euro area has soared, and all indications are that the Continent is entering a new recession. Meanwhile, inflation is slowing, and market expectations of future inflation have plunged. By any of the usual rules of monetary policy, the situation calls for aggressive rate cuts. But the central bank won’t move.

And that doesn’t even take into account the growing risk of a euro crackup. For years Spain and other troubled European nations have been told that they can only recover through a combination of fiscal austerity and “internal devaluation,” which basically means cutting wages. It’s now completely clear that this strategy can’t work unless there is strong growth and, yes, a moderate amount of inflation in the European “core,” mainly Germany — which supplies an extra reason to keep interest rates low and print lots of money. But the central bank won’t move.

Meanwhile, senior officials are asserting that austerity and internal devaluation really would work if only people truly believed in their necessity.

Consider, for example, what Jörg Asmussen, the German representative on the European Central Bank’s executive board, just said in Latvia, which has become the poster child for supposedly successful austerity. (It used to be Ireland, but the Irish economy keeps refusing to recover.) “The key difference between, say, Latvia and Greece,” Mr. Asmussen said, “lies in the degree of national ownership of the adjustment program — not only by national policy-makers but also by the population itself.”

Oh, and that Latvian success consists of one year of pretty good growth following a Depression-level economic decline over the previous three years. True, 5.5 percent growth is a lot better than nothing. But it’s worth noting that America’s economy grew almost twice that fast — 10.9 percent! — in 1934, as it rebounded from the worst of the Great Depression. Yet the Depression was far from over.

Put all of this together and you get a picture of a European policy elite always ready to spring into action to defend the banks, but otherwise completely unwilling to admit that its policies are failing the people the economy is supposed to serve.

Still, are we much better? America’s near-term outlook isn’t quite as dire as Europe’s, but the Federal Reserve’s own forecasts predict low inflation and very high unemployment for years to come — precisely the conditions under which the Fed should be leaping into action to boost the economy. But the Fed won’t move.

What explains this trans-Atlantic paralysis in the face of an ongoing human and economic disaster? Politics is surely part of it — whatever they may say, Fed officials are clearly intimidated by warnings that any expansionary policy will be seen as coming to the rescue of President Obama. So, too, is a mentality that sees economic pain as somehow redeeming, a mentality that a British journalist once dubbed “sado-monetarism.”

Whatever the deep roots of this paralysis, it’s becoming increasingly clear that it will take utter catastrophe to get any real policy action that goes beyond bank bailouts. But don’t despair: at the rate things are going, especially in Europe, utter catastrophe may be just around the corner.

A version of this op-ed appeared in print on June 11, 2012, on page A19 of the New York edition with the headline: Another Bank Bailout.

Economics and Morality: Paul Krugman's Framing

Lakoff and Wehling are authors of The Little Blue Book: The Essential Guide to Thinking and Talking Democratic, where morally-based framing is discussed in great detail.

In his June 11, 2012 op-ed in the New York Times, Paul Krugman goes beyond economic analysis to bring up the morality and the conceptual framing that determines economic policy. He speaks of "the people the economy is supposed to serve" -- "the unemployed," and "workers"-- and "the mentality that sees economic pain as somehow redeeming."

Krugman is right to bring these matters up. Markets are not provided by nature. They are constructed -- by laws, rules, and institutions. All of these have moral bases of one sort or another. Hence, all markets are moral, according to someone's sense of morality. The only question is, Whose morality? In contemporary America, it is conservative versus progressive morality that governs forms of economic policy. The systems of morality behind economic policies need to be discussed.

Most Democrats, consciously or mostly unconsciously, use a moral view deriving from an idealized notion of nurturant parenting, a morality based on caring about their fellow citizens, and acting responsibly both for themselves and others with what President Obama has called "an ethic of excellence" -- doing one's best not just for oneself, but for one's family, community, and country, and for the world. Government on this view has two moral missions: to protect and empower everyone equally.

The means is The Public, which provides infrastructure, public education, and regulations to maximize health, protection and justice, a sustainable environment, systems for information and transportation, and so forth. The Public is necessary for The Private, especially private enterprise, which relies on all of the above. The liberal market economy maximizes overall freedom by serving public needs: providing needed products at reasonable prices for reasonable profits, paying workers fairly and treating them well, and serving the communities to which they belong. In short, "the people the economy is supposed to serve" are ordinary citizens. This has been the basis of American democracy from the beginning.

Conservatives hold a different moral perspective, based on an idealized notion of a strict father family. In this model, the father is The Decider, who is in charge, knows right from wrong, and teaches children morality by punishing them painfully when they do wrong, so that they can become disciplined enough to do right and thrive in the market. If they are not well-off, they are not sufficiently disciplined and so cannot be moral: they deserve their poverty. Applied to conservative politics, this yields a moral hierarchy with the wealthy, morally disciplined citizens deservedly on the top.

Democracy is seen as providing liberty, the freedom to seek one's self interest with minimal responsibility for the interests or well-being of others. It is laissez-faire liberty. Responsibility is personal, not social. People should be able to be their own strict fathers, Deciders on their own -- the ideal of conservative populists, who are voting their morality not their economic interests. Those who are needy are assumed to be weak and undisciplined and therefore morally lacking. The most moral people are the rich. The slogan, "Let the market decide," sees the market itself as The Decider, the ultimate authority, where there should be no government power over it to regulate, tax, protect workers, and to impose fines in tort cases. Those with no money are undisciplined, not moral, and so should be punished. The poor can earn redemption only by suffering and thus, supposedly, getting an incentive to do better.

If you believe all of this, and if you see the world only from this perspective, then you cannot possibly perceive the deep economic truth that The Public is necessary for The Private, for a decent private life and private enterprise. The denial of this truth, and the desire to eliminate The Public altogether, can unfortunately come naturally and honestly via this moral perspective.

When Krugman speaks of those who have "the mentality that sees economic pain as somehow redeeming," he is speaking of those who have ordinary conservative morality, the more than forty percent who voted for John McCain and who now support Mitt Romney -- and Angela Merkel's call for "austerity" in Germany. It is conservative moral thought that gives the word "austerity" a positive moral connotation.

Just as the authority of a strict father must always be maintained, so the highest value in this conservative moral system is the preservation, extension, and ultimate victory of the conservative moral system itself. Preaching about the deficit is only a means to an end -- eliminating funding for The Public and bringing us closer to permanent conservative domination. From this perspective, the Paul Ryan budget makes sense -- cut funding for The Public (the antithesis of conservative morality) and reward the rich (who are the best people from a conservative moral perspective). Economic truth is irrelevant here.

Historically, American democracy is premised on the moral principle that citizens care about each other and that a robust Public is the way to act on that care. Who is the market economy for? All of us. Equally. But with the sway of conservative morality, we are moving toward a 1 percent economy -- for the bankers, the wealthy investors, and the super rich like the six members of the family that owns Walmart and has accumulated more wealth than the bottom 30 percent of Americans. Six people!

What is wrong with a 1 percent economy? As Joseph Stiglitz has pointed out in The Price of Inequality, the 1 percent economy eliminates opportunity for over a hundred million Americans. From the Land of Opportunity, we are in danger of becoming the Land of Opportunism.

If there is hope in our present situation, it lies with people who are morally complex, who are progressive on some issues and conservative on others -- often called "moderates," "independents," and "swing voters." They have both moral systems in their brains: when one is turned on, the other is turned off. The one that is turned on more often gets strongest. Quoting conservative language, even to argue against it, just strengthens conservatism in the brain of people who are morally complex. It is vital that they hear the progressive values of the traditional American moral system, the truth that The Public is necessary for The Private, the truth that our freedom depends on a robust Public, and that the economy is for all of us.

We must talk about those truths -- over and over, every day. To help, we have written The Little Blue Book. It can be ordered from Barnes & Noble, Amazon, and iTunes, and after June 26 at your local bookstore.

The Little Blue Book: The Essential Guide to Thinking and Talking Democratic

The indispensable handbook for Democrats

Voters cast their ballots for what they believe is right, for the things that make moral sense. Yet Democrats have too often failed to use language linking their moral values with their policies. The Little Blue Book demonstrates how to make that connection clearly and forcefully, with hands-on advice for discussing the most pressing issues of our time: the economy, health care, women’s issues, energy and environmental policy, education, food policy, and more. Dissecting the ways that extreme conservative positions have permeated political discourse, Lakoff and Wehling show how to fight back on moral grounds and in concrete terms. Revelatory, passionate, and deeply practical, The Little Blue Book will forever alter the way Democrats and progressives think and talk about politics.

the Imaginative Faculty:

the capacity to picture mental objects before one's eyes.

Vision incites the courage to be, to act, to succeed.

It gives us the passion for the possible.

Vision is also the picture of the future, the promised land, whether it is Moses leading the Israelites out of Egypt or Martin Luther King Jr. offering a land where people are judged not by the color of their skin but by the content of their character.

Our future is not Armageddon, nor need it be a crazy patchwork quilt of Band-Aids covering insoluble problems.

Our future can be a unique civilization that honors diversity, that moves from fixing to finding, from debate to conversation, and from despair to discovery, and that embraces nonviolence as a way of life and the maintenance of contracts among ever larger groups.

Our future can be the New Story.

Jean Houston is co-director of the Foundation for Mind Research and an eminent scholar, philosopher, and teacher. She has written numerous books drawing on thirty-five years of research on human development and exploring ways of developing extraordinary capacities in people and organizations throughout the world.

The most successful leaders will be those who openly address their fears and anxieties in dialogue with their people; visibly and by example act like we are truly all in this together; demonstrate their sacrifices when employees’ jobs are sacrificed; remain stable in the face of change; and coach their people to learn and grow in the face of uncertainty, anxiety, and fear.

This proactive questioning, experimenting and modeling by leaders will create the new rules and roadmaps of the future that will guide them and us through crises and uncertainty.

even if, as Drucker himself writes,

"The best way to predict the future is to create it."

Can reformism save Italy?

Here are the points of change

by WALTER VELTRONI

(Full article in the Comments)

STRATEGIC VISION

A specialist was hired to develop and present a series of half-day training seminars on empowerment and teamwork for the managers of a large international oil company. Fifteen minutes into the first presentation, he took a headlong plunge into the trap of assumption. With great intent, he laid the groundwork for what he considered the heart of empowerment: team building, family, and community. He praised the need for energy, commitment, and passion for production. At what he thought was the appropriate time, he asked the group of 40 managers the simple question on which he was to ground his entire talk:

"What is the vision of your company?"

No one raised a hand. The speaker thought they might be shy, so he gently encouraged them. The room grew deadly silent. Everyone was looking at everyone else, and he had a sinking sensation in his stomach. "Your company does have a vision, doesn't it?" he asked. A few people shrugged, and a few shook their heads. He was dumbfounded. How could any group or individual strive toward greatness and mastery without a vision? That's exactly the point. They can't. They can maintain, they can survive; but they can't expect to achieve greatness.

Vision is a widely used term, but not well understood.

Perhaps leaders don't understand what vision is, or why it is important.

One strategic leader is quoted as saying, "I've come to believe that we need a vision to guide us, but I can't seem to get my hands on what 'vision' is. I've heard lots of terms like mission, purpose, values, and strategic intent, but no one has given me a satisfactory way of looking at vision that will help me sort out this morass of words. It's really frustrating!" (Collins and Porras 1991). To understand vision, we must first clarify what the term means.

DEFINING VISION

One definition of vision comes from Burt Nanus, a well-known expert on the subject. Nanus defines a vision as "a realistic, credible, attractive future for [an] organization."

Let's dissect this definition:

Realistic: A vision must be based in reality to be meaningful for an organization. For example, if you're developing a vision for a computer software company that has carved out a small niche in the market developing instructional software and has a 1.5 percent share of the computer software market, a vision to overtake Microsoft and dominate the software market is not realistic!

Credible: A vision must be believable to be relevant. To whom must a vision be credible? Most importantly, to the employees or members of the organization. If the members of the organization do not find the vision credible, it will not be meaningful or serve a useful purpose. One of the purposes of a vision is to inspire those in the organization to achieve a level of excellence, and to provide purpose and direction for the work of those employees. A vision which is not credible will accomplish neither of these ends.

Attractive: If a vision is going to inspire and motivate those in the organization, it must be attractive. People must want to be part of this future that's envisioned for the organization.

Future: A vision is not in the present, it is in the future. In this respect, the image of the leader gazing off into the distance to formulate a vision may not be a bad one. A vision is not where you are now, it's where you want to be in the future. (If you reach or attain a vision, and it's no longer in the future, but in the present, is it still a vision?)

Nanus goes on to say that the right vision for an organization, one that is a realistic, credible, attractive future for that organization, can accomplish a number of things for the organization:

It attracts commitment and energizes people. This is one of the primary reasons for having a vision for an organization: its motivational effect. When people can see that the organization is committed to a vision--and that entails more than just having a vision statement--it generates enthusiasm about the course the organization intends to follow, and increases the commitment of people to work toward achieving that vision.

It creates meaning in workers' lives. A vision allows people to feel like they are part of a greater whole, and hence provides meaning for their work. The right vision will mean something to everyone in the organization if they can see how what they do contributes to that vision. Consider the difference between the hotel service worker who can only say, "I make beds and clean bathrooms," to the one who can also say, "I'm part of a team committed to becoming the worldwide leader in providing quality service to our hotel guests." The work is the same, but the context and meaning of the work is different.

It establishes a standard of excellence. A vision serves a very important function in establishing a standard of excellence. In fact, a good vision is all about excellence. Tom Peters, the author of In Search of Excellence, talks about going into an organization where a number of problems existed. When he attempted to get the organization's leadership to address the problems, he got the defensive response, "But we're no worse than anyone else!" Peters cites this sarcastically as a great vision for an organization: "Acme Widgets: We're No Worse Than Anyone Else!" A vision so lacking in any striving for excellence would not motivate or excite anyone about that organization. The standard of excellence also can serve as a continuing goal and stimulate quality improvement programs, as well as providing a measure of the worth of the organization.

It bridges the present and the future. The right vision takes the organization out of the present, and focuses it on the future. It's easy to get caught up in the crises of the day, and to lose sight of where you were heading. A good vision can orient you on the future, and provide positive direction. The vision alone isn't enough to move you from the present to the future, however. That's where a strategic plan, discussed later in the chapter, comes in. A vision is the desired future state for the organization; the strategic plan is how to get from where you are now to where you want to be in the future.

Another definition of vision comes from Oren Harari:

"Vision should describe a set of ideals and priorities, a picture of the future, a sense of what makes the company special and unique, a core set of principles that the company stands for, and a broad set of compelling criteria that will help define organizational success."

Are there any differences between Nanus's and Harari's definitions of vision?

What are the similarities? Do these definitions help clarify the concept of vision and bring it into focus?

An additional framework for examining vision is put forward by Collins and Porras:

They conceptualize vision as having two major components: a Guiding Philosophy, and a Tangible Image. They define the guiding philosophy as "a system of fundamental motivating assumptions, principles, values and tenets." The guiding philosophy stems from the organization's core beliefs and values and its purpose.

CORE BELIEFS AND VALUES

Just as they underlie organizational culture, beliefs and values are a critical part of guiding philosophy and therefore vision. One CEO expressed the importance of core values and beliefs this way:

I firmly believe that any organization, in order to survive and achieve success, must have a sound set of beliefs on which it premises all its policies and actions. Next, I believe that the most important single factor in corporate success is faithful adherence to those beliefs. And, finally, I believe [the organization] must be willing to change everything about itself except those beliefs as it moves through corporate life. (Collins and Porras 1991)

Core values and beliefs can relate to different constituents such as customers, employees, and shareholders, to the organization's goals, to ethical conduct, or to the organization's management and leadership philosophy. Baxter Healthcare Corporation has articulated three Shared Values: Respect for their Employees, Responsiveness to their Customers, and Results for their Shareholders, skillfully linking their core values to their key constituencies and also saying something about what is important to the organization. The key, however, is whether these are not only stated but also operating values.

Collins and Porras have provided examples of core values and beliefs from a survey of industry they conducted, and cite the following examples, among others:

About People

Marriott: "See the good in people, and try to develop those qualities."

About Customers

L.L. Bean: "Sell good merchandise at a reasonable price; treat your customers like you would your friends, and the business will take care of itself."

About Products

Sony: "We should always be the pioneers with our products--out front leading the market. We believe in leading the public with new products rather than asking them what kind of products they want."

About Management and Business

Motorola: "Everything will turn out alright if we just keep in motion, forever moving forward."

PURPOSE

The second part of guiding philosophy is purpose--why the organization exists, what needs it fills. Collins and Porras believe a good purpose statement should be "broad, fundamental, inspirational, and enduring." Consider this purpose statement: "The purpose of the United States Naval Academy at Annapolis, MD, is to prepare midshipmen to become professional officers in the naval service." Or this purpose statement from Apple Computer: "To make a contribution to the world by making tools for the mind that advance humankind." How do these statements of purpose stack up?

Whether individual, team, organization or nation, a sense of purpose and direction is essential to commitment. A shared sense of purpose is the glue that binds people together in common cause, often linking each individual's goals with the organization's goals. Properly formulated, a shared sense of purpose provides understanding of the need for coordinated collective effort -- for subordinating individual interest to the larger objective that can be achieved only by the collective effort. When it is long range in nature, it is the basis for detailed planning for the allocation of resources. When it is noble and inspiring, it gives dignity and respect to those participating in the effort. And, when it promises a better future, it gives hope to all who seek it.

The second major component of vision is tangible image. This is composed of a mission and a vivid description. Mission is "a clear and compelling goal that serves to unify an organization's effort. An effective mission must stretch and challenge the organization, yet be achievable" (Collins and Porras). There are four approaches to developing a mission statement: targeting, common enemy, role model, and transformation. Targeting means developing your mission statement around a clear, definable goal. An example is the mission Merck Pharmaceuticals set in 1979: To establish Merck as the preeminent drug-maker worldwide in the 1980s. The common enemy approach is to focus the mission on overtaking or dominating a rival. Athletic shoe, automobile, and telecommunications companies are all examples of areas where competition with rivals dominates company missions. Role model missions take an exemplar in another industry, and benchmark off that exemplar. For example, a mission "To be the Microsoft of retail sales companies" employs the role modeling technique. Finally, the internal transformation approach tends to focus on the internal remaking, restructuring, or rebirth of an organization. One example of a transformational mission might be the Army's recent efforts to transform itself into a newer, leaner Army positioned for the 21st century.

This may be a good point to address the confusion over the use of the terms purpose, mission and vision. Collins and Porras view purpose and mission as components of vision. Others, such as Nanus, differentiate between mission and vision. Nanus states, "A vision is not a mission. To state that an organization has a mission is to state its purpose, not its direction." Further complicating the semantics, different organizations may have mission statements, vision statements, purpose statements, or all three. To take one example, Quorum Health Group, Inc., a hospital management company, differentiates between its mission and vision this way:

MISSION STATEMENT

Quorum Health Group, Inc., is a hospital company committed to meeting the needs of clients as an owner, manager, consultant or partner through innovative services that enhance the delivery of quality healthcare.

VISION STATEMENT

Quorum Health Group, Inc., will be valued for its expertise in hospital management and its ability to positively impact the delivery of quality healthcare.

What are the differences between these two statements? Note that the mission is oriented in the present ("QHG is a company . . ."), while the vision is oriented in the future ("QHG will be valued . . ."). The mission states what QHG does in relatively concrete terms (meet the needs of clients by providing services that enhance the delivery of quality healthcare), while the vision states what it wants to do in more idealistic terms (be valued for its expertise and its ability to positively impact the delivery of quality healthcare).

One final example may illustrate how confused mission and vision can become. The Coca-Cola Company in 1994 published a booklet entitled "Our Mission and Our Commitment." In that booklet, the company defines its mission as follows:

We exist to create value for our share owners on a long term basis by building a business that enhances The Coca-Cola Company's trademarks. This also is our ultimate commitment. As the world's largest beverage company, we refresh that world. We do this by developing superior soft drinks, both carbonated and non-carbonated, and profitable non-alcoholic beverage systems that create value for our Company, our bottling partners, and our customers.

Does this sound like a mission, a vision, or a combination of both?

In Collins and Porras's framework for vision, their last element is a vivid description or picture of the end state that completion of the mission represents. They view this as essential to bringing the mission to life. A vivid description gives the mission the ability to inspire and motivate. Look back at the Coca-Cola Company mission shown above. Does it paint a vivid description of completion of the mission, or would The Coca-Cola Company have to amplify the mission statement?

To put this all together, Collins and Porras present the following framework:

core beliefs and values + purpose = guiding philosophy

mission + vivid description = tangible image

guiding philosophy + tangible image = vision

Note that, as opposed to Nanus, Collins and Porras do not focus on a vision statement, but on a vision as consisting of elements shown above. It's worth exploring the properties of a good vision. Nanus has several guidelines for creating a realistic, credible, attractive future for an organization:

PROPERTIES OF A GOOD VISION

A good vision is a mental model of a future state. It involves thinking about the future, and modeling possible future states. A vision doesn't exist in the present, and it may or may not be reached in the future. Nanus describes it like this: "A vision portrays a fictitious world that cannot be observed or verified in advance and that, in fact, may never become reality" (emphasis added). However, if it is a good mental model, it shows the way to identify goals and how to plan to achieve them.

A good vision is idealistic. How can a vision be realistic and idealistic at the same time? One way of reconciling these apparently contradictory properties of a vision is that the vision is realistic enough so that people believe it is achievable, but idealistic enough so that it cannot be achieved without stretching. If it is too easily achievable, it will not set a standard of excellence, nor will it motivate people to want to work toward it. On the other hand, if it is too idealistic, it may be perceived as beyond the reach of those in the organization, and discourage motivation. A realistic vision for that software company might be to maintain their current market share and produce instructional software that meets quality standards. Realistic, yes; but inspiring? No. A realistic yet also idealistic vision might be: To become the industry leader in the development of state-of-the-art instructional software products, known for the quality and the innovativeness of their design.

A good vision is appropriate for the organization and for the times. A vision must be consistent with the organization's values and culture, and its place in its environment. It must also be realistic. For example, in a time of downsizing and consolidation in an industry, a very ambitious, expansionistic vision would not be appropriate. An organization with a history of being conservative, and a culture encouraging conformity rather than risk taking, would not find an innovative vision appropriate. The computer software company mentioned above, with a history of producing high quality instructional software, would not find a vision to become the industry leader in video games or virtual reality software an appropriate one.

A good vision sets standards of excellence and reflects high ideals. Generally, the vision proposed above for the software company does reflect measurable standards of excellence and a high level of aspiration. The actual measure could be the external reputation of the company, as assessed by having users evaluate the company and its products.

A good vision clarifies purpose and direction. In defining that "realistic, credible, attractive future for an organization," a vision provides the rationale for both the mission and the goals the organization should pursue. This creates meaning in workers' lives by clarifying purpose, and making clear what the organization wants to achieve. For people in the organization, a good vision should answer the question, "Why do I go to work?" With a good vision, the answer to that question should not only be, "To earn a paycheck," but also, "To help build that attractive future for the organization and achieve a higher standard of excellence."

A good vision inspires enthusiasm and encourages commitment. An inspiring vision can help people in an organization get excited about what they're doing, and increase their commitment to the organization. The computer industry is an excellent example of one characterized by organizations with good visions. A recent article reported that it is not unusual for people to work 80-hour weeks, and for people to be at work at any hour of the day or night. Some firms had to find ways to make employees go home, not ways to make them come to work! What accounts for this incredible work ethic? It is the sense of working in organizations that are building the future in a rapidly evolving and unconstrained field, where an individual's work makes a difference, and where everyone shares a vision for the future.

A good vision is well articulated and easily understood. In order to motivate individuals, and clearly point toward the future, a vision must be articulated so people understand it. Most often, this will be in the form of a vision statement. There are dangers in being too terse, or too long-winded. A vision must be more than a slogan or a "bumper sticker." Slogans such as Ford Motor Company's "Quality Is Job One" are good marketing tools, but the slogan doesn't capture all the essential elements of a vision. On the other hand, a long document that expounds an organization's philosophy and lays out its strategic plan is too complex to be a vision statement. The key is to strike a balance.

A good vision reflects the uniqueness of the organization, its distinctive competence, what it stands for, and what it is able to achieve. This is where the leaders of an organization need to ask themselves, "What is the one thing we do better than anyone else? What is it that sets us apart from others in our area of business?" A good example of a visioning process refocusing a company on its core competencies is Sears. A few years ago, Sears had expanded into areas far afield from its original business as a retailer. Among other things, Sears began offering financial services at their stores. Poor performance led Sears to realize that they could not compete with financial services companies whose core business was in that area, so they dropped that service and eliminated other aspects of their business not related to retailing. Interestingly, Sears' primary competitor is Wal-Mart, an organization with a very clear and compelling vision. Sam Walton found a niche in providing one-stop shopping for people in rural areas, and overwhelmed "Mom and Pop" stores with volume buying and discounting. Wal-Mart is very clear about their vision, and has focused on specific areas where they can be the industry leader. The key is finding what it is that your organization does best.

Focus your vision there.

A good vision is ambitious. It must not be commonplace. It must be truly extraordinary. This property gets back to the idea of a vision that causes people and the organization to stretch. A good vision pushes the organization to a higher standard of excellence, challenging its members to try and achieve a level of performance they haven't achieved before. Inspiring, motivating, compelling visions are not about maintaining the status quo.

DEVELOPING A VISION

At this point you should know what a good vision consists of, and recognize a vision statement when you see one. But how does a strategic leader go about developing a vision for an organization? Nanus also offers a few words of advice to someone formulating a vision for an organization:

Learn everything you can about the organization. There is no substitute for a thorough understanding of the organization as a foundation for your vision.

Bring the organization's major constituencies into the visioning process. This is one of Nanus's imperatives: don't try to do it alone. If you're going to get others to buy into your vision, if it's going to be a wholly shared vision, involvement of at least the key people in the organization is essential. "Constituencies" refers to people both inside and outside the organization who can have a major impact on the organization, or who can be impacted by it. Another term for constituencies is "stakeholders"--those who have a stake in the organization.

Keep an open mind as you explore the options for a new vision. Don't be constrained in your thinking by the organization's current direction - it may be right, but it may not.

Encourage input from your colleagues and subordinates. Another injunction about not trying to do it alone: those down in the organization often know it best and have a wealth of untapped ideas. Talk with them!

Understand and appreciate the existing vision. Provide continuity if possible, and don't throw out good ideas because you didn't originate them.

In his book about visionary leadership, Nanus describes a seven-step process for formulating a vision:

1. Understand the organization. To formulate a vision for an organization, you first must understand it. Essential questions to be answered include what its mission and purpose are, what value it provides to society, what the character of the industry is, what institutional framework the organization operates in, what the organization's position is within that framework, what it takes for the organization to succeed, who the critical stakeholders are, both inside and outside the organization, and what their interests and expectations are.

2. Conduct a vision audit. This step involves assessing the current direction and momentum of the organization. Key questions to be answered include: Does the organization have a clearly stated vision? What is the organization's current direction? Do the key leaders of the organization know where the organization is headed and agree on the direction? Do the organization's structures, processes, personnel, incentives, and information systems support the current direction?

3. Target the vision. This step involves starting to narrow in on a vision. Key questions: What are the boundaries or constraints to the vision? What must the vision accomplish? What critical issues must be addressed in the vision?

4. Set the vision context. This is where you look to the future, and where the process of formulating a vision gets difficult. Your vision is a desirable future for the organization. To craft that vision you first must think about what the organization's future environment might look like. This doesn't mean you need to predict the future, only to make some informed estimates about what future environments might look like. First, categorize future developments in the environment which might affect your vision. Second, list your expectations for the future in each category. Third, determine which of these expectations is most likely to occur. And fourth, assign a probability of occurrence to each expectation.

5. Develop future scenarios. This step follows directly from the fourth step. Having determined, as best you can, those expectations most likely to occur, and those with the most impact on your vision, combine those expectations into a few brief scenarios to include the range of possible futures you anticipate. The scenarios should represent, in the aggregate, the alternative "futures" the organization is likely to operate within.

6. Generate alternative visions. Just as there are several alternative futures for the environment, there are several directions the organization might take in the future. The purpose of this step is to generate visions reflecting those different directions. Do not evaluate your possible visions at this point, but use a relatively unconstrained approach.

7. Choose the final vision. Here's the decision point where you select the best possible vision for your organization. To do this, first look at the properties of a good vision, and what it takes for a vision to succeed, including consistency with the organization's culture and values. Next, compare the visions you've generated with the alternative scenarios, and determine which of the possible visions will apply to the broadest range of scenarios. The final vision should be the one which best meets the criteria of a good vision, is compatible with the organization's culture and values, and applies to a broad range of alternative scenarios (possible futures).
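One way to make this final step concrete is a simple scoring matrix: rate each candidate vision against each scenario from step 5, weight the scenarios by the probabilities estimated in step 4, and favor the vision with the best weighted coverage. The sketch below, in Python, uses the software-company example from earlier in the chapter; the scenarios, ratings, and weights are all placeholders, not part of Nanus's text.

    # Probability weights for the alternative futures from step 5 (hypothetical).
    scenarios = {"industry consolidation": 0.5, "rapid growth": 0.3, "new entrants": 0.2}

    # How well each candidate vision holds up under each scenario,
    # rated 0-10 by the visioning team (hypothetical ratings).
    ratings = {
        "industry leader in instructional software":
            {"industry consolidation": 8, "rapid growth": 9, "new entrants": 7},
        "maintain current niche and market share":
            {"industry consolidation": 6, "rapid growth": 3, "new entrants": 4},
    }

    def weighted_score(vision):
        # Expected fit across scenarios: rating times scenario probability.
        return sum(ratings[vision][s] * p for s, p in scenarios.items())

    best = max(ratings, key=weighted_score)
    print(best, round(weighted_score(best), 2))

A matrix like this does not choose the vision by itself, but it forces the comparison Nanus asks for: which candidate applies across the broadest range of plausible futures.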

IMPLEMENTING THE VISION

Now that you have a vision statement for your organization, are you done? Formulating the vision is only the first step; implementing the vision is much harder, but must follow if the vision is going to have any effect on the organization. The three critical tasks of the strategic leader are formulating the vision, communicating it, and implementing it. Some organizations think that developing the vision is all that is necessary. If they have not planned for implementing that vision, development of the vision has been wasted effort. Even worse, a stated vision which is not implemented may have adverse effects within the organization because it initially creates expectations that lead to cynicism when those expectations are not met.

Before implementing the vision, the leader needs to communicate the vision to all the organization's stakeholders, particularly those inside the organization. The vision needs to be well articulated so that it can be easily understood. And, if the vision is to inspire enthusiasm and encourage commitment, it must be communicated to all the members of the organization.

How do you communicate a vision to a large and diverse organization? The key is to communicate the vision through multiple means. Some techniques used by organizations to communicate the vision include disseminating the vision in written form; preparing audiovisual shows outlining and explaining the vision; and presenting an explanation of the vision in speeches, interviews or press releases by the organization's leaders. An organization's leaders also may publicly "sign up" for the vision. You've got to "walk your talk." For the vision to have credibility, leaders must not only say they believe in the vision; they must demonstrate that they do through their decisions and their actions.

Once you've communicated your vision, how do you go about implementing it? This is where strategic planning comes in. To describe the relationship between strategic visioning and strategic planning in very simple terms, visioning can be considered as establishing where you want the organization to be in the future; strategic planning determines how to get there from where you are now. Strategic planning links the present to the future, and shows how you intend to move toward your vision. One process of strategic planning is to first develop goals to help you achieve your vision, then develop actions that will enable the organization to reach these goals.

CONCLUSION

An organization can and must develop a strategic plan that includes specific and measurable goals to implement its vision. A comprehensive plan will recognize where the organization is today and cover all the areas where action is needed to move toward the vision. In addition to being specific and measurable, actions should clearly state who is responsible for their completion, and they should have milestones tied to them so that progress toward the goals can be measured.
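As a minimal sketch (the class and field names below are illustrative assumptions, not a standard), here is one way such a plan element could be represented in Python so that responsibility and progress are explicit rather than implied:

# Illustrative sketch only: a plan "action" that is specific, names the
# person responsible, and carries dated milestones so that progress
# toward the goals can be measured. Names are hypothetical.
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class Milestone:
    description: str
    due: date
    done: bool = False

@dataclass
class Action:
    description: str    # specific and measurable
    owner: str          # who is responsible for completion
    milestones: List[Milestone] = field(default_factory=list)

    def progress(self) -> float:
        # Fraction of milestones completed so far.
        if not self.milestones:
            return 0.0
        return sum(m.done for m in self.milestones) / len(self.milestones)

action = Action(
    description="Cut average order-fulfilment time from 5 days to 2",
    owner="Head of Operations",
    milestones=[
        Milestone("Map the current fulfilment process", date(2019, 3, 1), done=True),
        Milestone("Deploy the revised process in all warehouses", date(2019, 9, 1)),
    ],
)
print(f"{action.owner}: {action.progress():.0%} complete")  # 50% complete

The structure simply makes the paragraph's requirements checkable: every action carries an owner and dated milestones, so progress toward the goals becomes a computation rather than an impression.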

Implementing the vision does not stop with the formulation of a strategic plan - the organization that stops at this point is not much better off than one that stops when the vision is formulated. Real implementation of a vision is in the execution of the strategic plan throughout the organization, in the continual monitoring of progress toward the vision, and in the continual revision of the strategic plan as changes in the organization or its environment necessitate. The bottom line is that visioning is not a discrete event, but rather an ongoing process.

A FORMULA FOR VISIONARY LEADERSHIP

Burt Nanus sums up his concepts with two simple formulas (slightly modified):

STRATEGIC VISION X COMMUNICATION = SHARED PURPOSE

SHARED PURPOSE X EMPOWERED PEOPLE X APPROPRIATE ORGANIZATIONAL CHANGES X STRATEGIC THINKING = SUCCESSFUL VISIONARY LEADERSHIP

Each one of the terms places unique and special demands on the strategic leader. If you can put these elements together in an organization, and you have a good vision to start with, you should be well on the way to achieving excellence. Collins and Porras affirm: "The function of a leader - the one universal requirement of effective leadership - is to catalyze a clear and shared vision of the organization and to secure commitment to and vigorous pursuit of that vision."

As Drucker explained, "Knowledge workers are neither bosses nor workers, but rather something in between--resources who have responsibility for developing their most important resource, brainpower, and who also need to take more control of their own careers."

What exactly does it mean to be an educated person? The definition of an educated person has changed dramatically over the last century, and this is what Peter Drucker, author of "The Age of Social Transformation," discusses in his essay.

He believes that an educated person is one "who has learned how to learn, and who continues learning, especially by formal education, throughout his or her lifetime" (Drucker 233).

People without this type of education are seen as failures in today's society. A person with an abundance of knowledge acquired through formal education is usually placed upon a pedestal, a status signified by occupation (the professions) and standard of living. This standard is a rule each person in society is expected to live up to. Without schooling, an individual is looked down upon and does not receive opportunities to attain a higher position in his or her society. This is a society in which the "common good" is not taken into consideration. Society has become ignorant of the fact that some individuals do not have the opportunity to receive a formal education; but does that mean they cannot acquire knowledge in other ways?

The Eight New Management Assumptions

Drucker identifies the following new assumptions for the social discipline of management.

1. Management is NOT only for profit-making businesses. Management is the specific and distinguishing organ of any and all organizations.

2. There is NOT only one right organization. The right organization is the organization that fits the task.

3. There is NOT one right way to manage people. One does not "manage" people. The task is to lead people. And the goal is to make productive the specific strengths and knowledge of each individual.

4. Technologies and End-Users are NOT fixed and given. Increasingly, neither technology nor end-use is a foundation of management policy. They are limitations. The foundations have to be customer values and customer decisions on the distribution of their disposable income. It is with those that management policy and management strategy increasingly will have to start.

5. Management's scope is NOT only legally defined. The new assumption on which management, both as a discipline and as a practice, will increasingly have to base itself is that the scope of management is not legal. It has to be operational. It has to embrace the entire process. It has to be focused on results and performance across the entire economic chain.

6. Management's scope is NOT only politically defined. National boundaries are important primarily as restraints. The practice of management - and by no means for business only - will increasingly have to be defined operationally rather than politically.

7. The Inside is NOT the only Management domain. The results of any institution exist ONLY on the outside. Management exists for the sake of the institution's results. It has to start with the intended results and organize the resources of the institution to attain these results. It is the organ that renders the institution, whether business, church, university, hospital or a battered women's shelter, capable of producing results outside of itself.

8. Management's concern and management's responsibility are everything that affects the performance of the institution and its results - whether inside or outside, whether under the institution's control or totally beyond it.

Managing Oneself

Five demands on knowledge workers:

1. They have to ask: Who am I? What are my strengths? How do I work?

2. They have to ask: Where do I belong?

3. They have to ask: What is my contribution?

4. They have to take Relationship Responsibility.

5. They have to plan for the Second Half of their Lives.

Drucker gives this advice for using feedback analysis:

1. Concentrate on your strengths.

Place yourself where your strengths can produce performance and results.

2. Work on improving your strengths.

The feedback analysis shows where to improve skills and where to acquire new knowledge.

One can usually acquire enough skill or knowledge in an area to avoid being incompetent in it.
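Feedback analysis itself is straightforward to operationalize: whenever you make a key decision, write down the result you expect, then compare expectations with actual results nine to twelve months later. A minimal sketch in Python follows; the record fields and the nine-month review interval encoded below are illustrative choices, not a prescribed format.

# Illustrative sketch of feedback analysis: note the expected result
# when a key decision is made, then compare it with the actual result
# roughly nine to twelve months later. Field names are assumptions.
from datetime import date, timedelta

records = []

def record_decision(decision, expected, made_on):
    records.append({
        "decision": decision,
        "expected": expected,
        "made_on": made_on,
        "review_on": made_on + timedelta(days=270),  # about nine months
        "actual": None,                              # filled in at review
    })

record_decision(
    decision="Take over the new-product launch",
    expected="Launch on schedule with the existing team",
    made_on=date(2018, 1, 15),
)

# At review time, record what actually happened; the gap between
# "expected" and "actual" is what reveals strengths and weaknesses.
records[0]["actual"] = "Launched two months late; needed outside help"
for r in records:
    print(r["decision"])
    print("  expected:", r["expected"])
    print("  actual:  ", r["actual"])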

For five centuries, our continent has been able to invent the ideas and the goods that have transformed the world, yet it seems to have lost the secret of their manufacture. It no longer knows if it is capable of inventing the world of tomorrow; it doesn't even know if it has a common future any more.

Of the two terms of Schumpeter's formula summarizing capitalism, "creative destruction," we have forgotten the former, that is to say, creation, leaving us only with the latter, destruction. For many, unemployment has become the norm. The hope of becoming a part of society through work has evaporated. Extreme ideologies bloom, though a single look at the world would be enough to demonstrate the absurdity of them all. Our societies thought they had built a balance in which every successive generation could legitimately hope that its progeny would have a better life. Today they are convinced that we can no longer keep this promise. Our systems of social negotiation have broken down, and our systems of social protection are threatened. Belief in progress has faded. Many perceive technical progress as a danger, economic progress as a lie, social progress as a mirage, democratic progress as an illusion.

We are living through a turning point, in great confusion. Nothing of what seemed obvious yesterday is evident today. Nor are there any signs to tell us what future certainties will be. The great points of reference -- the Nation, the State, Morality -- seem to have disappeared. The great hopes of tomorrow remain invisible.

We must struggle against this doubt, so devastating for a Europe whose history was built, precisely, upon progress.

When a majority of the population comes to think that tomorrow may well be worse than today, the only strategy it can see becomes that of preserving what exists. Everyone wants things to remain frozen as they are for as long as possible, in order to preserve his own interests, which leads to hampering, even preventing, all change. Fear is the greatest ally of conservatives. It feeds the rise of egoism: the social egoism of those who can, or believe they can, succeed in spite of others or against others; the ethnic egoism that rejects the other, who is held responsible for all ills; and the national egoism of each country persuaded it should prevail over its partners.

So how can one approach tomorrow in a new way?

We must invent a new world.

We must recover the meaning of progress, not progress as an automatic reflex or an empty word, but as an act of will. We must return to the idea that it is possible to act in order to influence things. Never become resigned, never submit, never retreat. We must not see the market as anything more than a means of coordinating individual actions: no society can organize itself simply by virtue of the market. Thus we must be wary of the liberal illusion of a society that has no need to think out its future or define its regulations. On the contrary, it is up to politics to reinvent itself, to define new rules and new institutions.

Many believe that in a so-called global and liberal economy, governments should have no power. They are mistaken. The crisis and the reactions to the crisis demonstrate that this is a fallacy, that there exist good policies and bad ones, that there exist good and bad regulations.

We must act in three areas:

1. Production, in other words, growth. We must tell ourselves that without growth, there can be no progress and no reduction of inequality.

2. Solidarity, which is a method as much as it is a necessity. There is no progress if it does not profit all and if it is not accepted by all. Solidarity in Europe is not only a part of our glorious past, it is the key to our tomorrow.

3. Public action, for the genius of Europe is first of all that of a collective project and a common destiny.

Production and growth, to begin with, to reach full employment.

That may seem like a utopia, but actually it is not.

The society of full employment we should strive for will not be that of the 60s. It will not be a society without unemployment. But it will be, or it should be, a society in which unemployment is only short-term. A mobile society in which every wage-earner can tell himself he will advance. The contrary of a society where everyone is pigeon-holed to remain in the same profession or at the same rank or level for decades. A society where all of us are perpetually learning or relearning. This implies a radical change in our relation to work and to our crafts and professions.

We must renew our solidarity.

It is the distinctive feature of Europe and of European society. Those who carry the banner of individualism refuse to understand that, in the social contract, we Europeans have a concept much richer than theirs, founded upon the existence of a common good that cannot be reduced to the sum of individual interests. We should be proud of what we have built: adequate medical care available to all, an end to poverty for the aged, solidarity towards those who do not have jobs. An economy more vulnerable to technical change and the appearance of new competition is also harsher. So it demands that those who miss out because of progress can count on the solidarity of those who are benefiting from it.

Finally, we must reinvent public service, public action, that is to say, the role of the State.

What counts is not the amount of taxes paid, it is the comparison between taxes and the quality of public goods and services offered in exchange: education, training, security, roads, railroads, communications infrastructure. It is the State's capacity to favor the creation of wealth, to ensure its just and efficient redistribution, to reduce inequality.

The key principle upon which this project must depend is that of equality. The rise of unprecedented inequality is characteristic of the present day. It is something new, and it has not slowed for decades. To borrow Necker's phrase, equality was the very idea of the Revolution. Yet today the force that is affecting and transforming the world is the development of inequality: inequality between countries, between regions of the world, between social classes, between generations. The result is the dissolution of the feeling of belonging to a common world. A world henceforth undermined by social inequality, the secession of the wealthy, and a revolt of those who feel, conversely, forgotten, despised, rejected or abandoned, and whose sole weapon is their discontent and the power of their indignation.

We must revive what was once the revolutionary plan: equality, in other words, a manner of building society, of producing together, of living together and of breathing life anew into the common good.

As Pierre Rosanvallon put it, it is a question of refounding a society of equals.

A society in which everyone possesses the same rights, in which each of us is recognized and respected as being as important as the others. A society that allows each one to change his life.

We must also take into consideration the political crisis we are currently experiencing. It is marked not only by political disengagement, abstention, and the rise of extreme ideologies, but also by an institutional crisis. To be more precise, a crisis of the political model. The crisis of the political model is the extreme concentration of power, and in particular the extreme concentration of executive power in the hands of one man, the President of the Republic. The real power of a sole individual versus the actual power of all. It is marked as well by a crisis of decision and a weakened legitimacy of institutions, government, ministers and other authorities.

What is to be done? To undertake a program of institutional reform comparable in its breadth to that of 1958, at the establishment of the 5th Republic. With two main objectives.

To make political decisions more effective and, with this in mind, introduce a dose of proportional representation in elections in order to ensure the best representation possible; reduce by half the number of parliamentary representatives, and outlaw the holding of multiple elected offices; downsize the number of ministers to fifteen, each concentrating on lofty missions of State and thus avoiding the dispersion of public action, thereby ridding ourselves of that French specificity which consists of incessantly inventing new ministries whose missions are vague but whose uselessness is certain.

Take up the challenge of democratic representation. The historic principle of representation, the idea according to which the people exercise real power through the intermediary of their elected representatives, can only function if we recognize that two principles have proven largely fictitious. The first is the view that a relative or absolute majority represents the opinion of all. The second is that the ballot represents the opinion of the citizen, whereas the rich diversity of an opinion cannot be reduced to the choice of one person at a given time. The result is a legitimate feeling of not being represented. The demand for better representation must be met with more participation, the submission of governments to intensified surveillance, to more frequent rendering of accounts, to new forms of inspection. It is not possible to keep an eye on every decision, but everyone must be entitled to participate in the collective power through a system of evaluation.

This is the price of the construction of a more just and meaningful society.

2. The Austrian School, Jesús Huerta de Soto, in association with the Institute of Economic Affairs, Edward Elgar Publishing, 2008.

Economics Evolving

a book by Agnar Sandmo


In clear, nontechnical language, this introductory textbook describes the history of economic thought, focusing on the development of economic theory from Adam Smith's "Wealth of Nations" to the late twentieth century. The text concentrates on the most important figures in the history of economics, from Smith, Thomas Robert Malthus, David Ricardo, John Stuart Mill, and Karl Marx in the classical period to John Maynard Keynes and the leading economists of the postwar era, such as John Hicks, Milton Friedman, and Paul Samuelson. It describes the development of theories concerning prices and markets, money and the price level, population and capital accumulation, and the choice between socialism and the market economy. The book examines how important economists have reflected on the sometimes conflicting goals of efficient resource use and socially acceptable income distribution. It also provides sketches of the lives and times of the major economists. "Economics Evolving" repeatedly shows how apparently simple ideas that are now taken for granted were at one time at the cutting edge of economics research. For example, the demand curve that today's students probably get to know during their first economics lecture was originally drawn by one of the most innovative theorists in the history of the subject. The book demonstrates not only how the study of economics has progressed over the course of its history, but also that it is still a developing science.

For a century and a half, the artists and intellectuals of Europe have scorned the bourgeoisie. And for a millennium and a half, the philosophers and theologians of Europe have scorned the marketplace. The bourgeois life, capitalism, Mencken's "booboisie" and David Brooks's "bobos": all have been, and still are, framed as responsible for everything from financial to moral poverty, world wars, and spiritual desuetude. Countering these centuries of assumptions and unexamined thinking is Deirdre McCloskey's The Bourgeois Virtues, a magnum opus that offers a radical view: capitalism is good for us.

McCloskey’s sweeping, charming, and even humorous survey of ethical thought and economic realities—from Plato to Barbara Ehrenreich—overturns every assumption we have about being bourgeois. Can you be virtuous and bourgeois? Do markets improve ethics? Has capitalism made us better as well as richer? Yes, yes, and yes, argues McCloskey, who takes on centuries of capitalism’s critics with her erudition and sheer scope of knowledge. Applying a new tradition of “virtue ethics” to our lives in modern economies, she affirms American capitalism without ignoring its faults and celebrates the bourgeois lives we actually live, without supposing that they must be lives without ethical foundations.

Deirdre McCloskey is a maverick, and in more ways than one.

The big economic story of our times is not the Great Recession. It is how China and India began to embrace neoliberal ideas of economics and attributed a sense of dignity and liberty to the bourgeoisie they had denied for so long. The result was an explosion in economic growth, and proof that economic change depends less on foreign trade, investment, or material causes, and a whole lot more on ideas and what people believe.

Or so says Deirdre N. McCloskey in Bourgeois Dignity, a fiercely contrarian history that wages a similar argument about economics in the West. Here she turns her attention to seventeenth- and eighteenth-century Europe to reconsider the birth of the industrial revolution and the rise of capitalism. According to McCloskey, our modern world was not the product of new markets and innovations, but rather the result of shifting opinions about them. During this time, talk of private property, commerce, and even the bourgeoisie itself radically altered, becoming far more approving and flying in the face of prejudices several millennia old. The wealth of