More deeply, this implies that Doug’s revolution requires a reversal of
the current relationship between humans and technology; we need to be
back in the driver’s seat.3 Hayles (1999) locates a similar contradiction in
Norbert Wiener’s work.
In particular, Engelbart feels we need to create tool systems that help us
deal with knowledge work in a more effective way. This objective is something
he claims he inherited from Bush’s 1945 paper, ‘As We May Think’ (Engelbart
1962), and it formed the basis of the ‘Conceptual Framework for Augmenting
Man’s Intellect’ he would later erect to explain and support the development
of the oN-Line System (NLS), a prototype hypertext system. As he told me:
We need to think about how to boost our collective IQ, and how important it would be to society because all of the technologies are just going to make our world accelerate faster and faster and get more and more complex and we're not equipped to cope with that complexity.

…

‘If we don’t
get proactive, the human side of [the equation] is going to always get pushed
by just the technology’ (Engelbart 1999). He wanted to build a laboratory
at SRI filled with psychologists and computer scientists to research the new
‘field’ of intellectual augmentation. Initially, though, SRI was skeptical, and
Engelbart was put to work in magnetics.
If the project was to have the kind of impact on the engineering community
he wanted, he needed the support of his peers. Engelbart decided to write a
conceptual framework for it, an agenda that the computing (and engineering)
community could understand.
It was ‘remarkably slow and sweaty work’ (Engelbart 1988, 190), but he
wrote the paper in 1962 and published it in 1963. It is interesting to note
here that both Engelbart and Ted Nelson found writing difficult; they were
far from prolific, and would draft and redraft multiple times. In Nelson’s
case this frustration was largely with the paradigm of paper, and it led
him to design an alternative.8 For Engelbart, it appears to be a frustration
at expressing his ideas in a form that the computing community would
understand.


Scale: The Universal Laws of Growth, Innovation, Sustainability, and the Pace of Life in Organisms, Cities, Economies, and Companies
by Geoffrey West

SIZE REALLY MATTERS: SCALING AND NONLINEAR BEHAVIOR
In addressing these diverse and seemingly unrelated questions, the lens I shall use will predominantly be that of Scale and the conceptual framework that of Science. Scaling and scalability, that is, how things change with size, and the fundamental rules and principles they obey are central themes that run throughout the book and are used as points of departure for developing almost all of the arguments presented. Viewed through this lens, cities, companies, plants, animals, our bodies, and even tumors manifest a remarkable similarity in the ways that they are organized and function. Each represents a fascinating variation on a general universal theme that is manifested in surprisingly systematic mathematical regularities and similarities in their organization, structure, and dynamics. These will be shown to be consequences of a broad, big-picture conceptual framework for understanding such disparate systems in an integrated unifying way, and with which many of the big issues can be addressed, analyzed, and understood.

…

The examples shown in Figures 1–4 are just a tiny sampling of an enormous number of such scaling relationships that quantitatively describe how almost any measurable characteristic of animals, plants, ecosystems, cities, and companies scales with size. You will be seeing many more of them throughout this book. The existence of these remarkable regularities strongly suggests that there is a common conceptual framework underlying all of these very different highly complex phenomena and that the dynamics, growth, and organization of animals, plants, human social behavior, cities, and companies are, in fact, subject to similar generic “laws.”
This is the main focus of this book. I will explain the nature and origin of these systematic scaling laws, how they are all interrelated, and how they lead to a deep and broad understanding of many aspects of life and ultimately to the challenge of global sustainability.
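The archetype of these scaling relationships is Kleiber's law: across species, an animal's metabolic rate grows roughly as the 3/4 power of its body mass. A minimal sketch of how such a power law behaves (the prefactor below is illustrative, not a fitted constant from the book):

    # Kleiber's law: metabolic rate B = b0 * M**(3/4), with M the body mass.
    # The prefactor b0 here is illustrative, not a measured value.
    def metabolic_rate(mass_kg, b0=3.4, exponent=0.75):
        return b0 * mass_kg ** exponent

    # Doubling mass multiplies metabolic rate by 2**0.75 (about 1.68), not 2:
    # a systematic economy of scale rather than simple proportionality.
    for mass_kg in (1, 10, 100, 1000):
        print(mass_kg, round(metabolic_rate(mass_kg), 1))

On log-log axes such a relationship is a straight line of slope 3/4, which is why scaling plots of this kind look so strikingly linear.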

…

This book is about a way of thinking, about asking big questions, and about suggesting big answers to some of those big questions. It’s a book about how some of the major challenges and issues we are grappling with today, ranging from rapid urbanization, growth, and global sustainability to understanding cancer, metabolism, and the origins of aging and death, can be addressed in an integrated unifying conceptual framework. It is a book about the remarkably similar ways in which cities, companies, tumors, and our bodies work, and how each of them represents a variation on a general theme manifesting surprisingly systematic regularities and similarities in their organization, structure, and dynamics. A common property shared by all of them is that they are highly complex and composed of enormous numbers of individual constituents, whether molecules, cells, or people, connected, interacting, and evolving via networked structures over multiple spatial and temporal scales.

There were no longer existing assets for Silverstein to manage, but he would continue to receive these fees as provided for in the December 2003 agreement between the PA and the Silverstein lessees. At this point, Silverstein and his company executives were managing what is called “the pre-development” process.
Within twenty-four hours the Port Authority sent back a revised Conceptual Framework, and in his response letter of April 26, “accepting all of your key terms,” Silverstein wrote, “We understand each other’s positions clearly, including the need for reasonable certainty of completion and financeability.” He wanted to proceed immediately, he said, toward the documentation of the transaction rather than focus any longer on the Conceptual Framework and its seventeen specific elements.42
The Port Authority commissioners were tentatively scheduled to consider approval of the transaction in September, and the amount of work to be done before then was huge.

…

Furthermore, the status of insurance claims under Silverstein’s insurance policies had to be clarified, and this was not as straightforward as might have been expected because less than a month after the April Conceptual Framework agreement, seven of the insurers attempted to use the new financial plan as a lever to delay, if not evade, their responsibility to pay the full $4.6 billion for rebuilding. Those insurance policies, as the Times editorialized, “have provoked enough charges, countercharges and court documents to fill a bank vault.”46 Another item yet to be definitively settled was whether Silverstein could get rights to the retail portion of the site, rights that were subject to a resolution clarifying the intent of the PA and Westfield America, which now held a right of first offer to redevelop the retail spaces. From the twelve-page Conceptual Framework, the realignment transaction would balloon into twelve volumes of agreements, hundreds of thousands of words diligently documenting the complex duality of an intergovernmental political arrangement and hard-fought business deal.

…

No wonder officials aren’t calling it a deal, but a ‘conceptual framework.’” The Times, more moderate in tone, also condemned “how badly blocked the redevelopment effort in Lower Manhattan has been in the last two years,” though from a different perspective: “There have been many obstructions, including Mr. Silverstein himself. After months of negotiations, the mayor and the governors of New York and New Jersey lined up solidly against the developer last week and offered him two good options and one solid deadline. Mr. Silverstein did the right thing by accepting one of these options, which should turn out to be a profitable deal, not only for his own company, but also for the public.” The amount of money the public was committing to make the Conceptual Framework a reasonable thrust at rebuilding—and ultimately would have to pump into Ground Zero—made the Times’ concluding phrase suspect from a financial perspective.

In contrast, most of the rules presented in Chapter 4 and Chapter 5 (particularly those that deal with media types and representational forms) are my solutions in the absence of consensus.
Note
When used in the context of rules, the key words: “must,” “must not,” “required,” “shall,” “shall not,” “should,” “should not,” “recommended,” “may,” and “optional” are to be interpreted as described in RFC 2119.[16]
WRML
I’ve invented a conceptual framework called the Web Resource Modeling Language (WRML) to assist with the design and implementation of REST APIs. WRML, pronounced like “wormle,” originated as a resource model diagramming technique that uses a set of basic shapes to represent each of the resource archetypes discussed in Resource Archetypes. The scope of WRML increased with the creation of the application/wrml media type,[17] which has pluggable format and schema components, as described in Media Type Design.
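As an illustration of those pluggable components, a WRML media type carries its format and schema as parameters. A hypothetical example (the parameter URIs below are placeholders, not the registered values):

    # Hypothetical Accept header a WRML-aware client might send: the format
    # parameter says how to serialize the representation, and the schema
    # parameter says what typed structure it conforms to. Placeholder URIs.
    accept_header = (
        "application/wrml; "
        'format="http://api.formats.example.org/application/json"; '
        'schema="http://api.schemas.example.org/player"'
    )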

…

[16] Key words for use in RFCs to Indicate Requirement Levels, RFC 2119, RFC Editor, 1997 (http://www.rfc-editor.org/rfc/rfc2119.txt).
[17] The application/wrml media type’s IANA registration is pending—see http://www.wrml.org for the most up-to-date information.
[18] http://www.json.org
Recap
This chapter presented a synopsis of the Web’s invention and stabilization. It motivated the book’s rule-oriented presentation and introduced WRML, a conceptual framework whose ideas promote a uniform REST API design methodology. Subsequent chapters will build on this foundation to help us leverage REST in API designs. Table 1-1 summarizes the vocabulary terms that were introduced in this chapter.
Table 1-1. Vocabulary review
Term / Description
Application Programming Interface (API)
Exposes a set of data and functions to facilitate interactions between computer programs.

…

Tim Berners-Lee developed the first one, which was able to view and edit HTML documents.
Web client (client)
A computer program that follows REST’s uniform interface in order to accept and transfer resource state representations to servers.
Web component (component)
A client, network-based intermediary, or server that complies with REST’s uniform interface.
Web Resource Modeling Language (WRML)
A conceptual framework whose ideas can be leveraged to design and implement uniform REST APIs.
Web server (server)
A computer program that follows REST’s uniform interface constraints in order to accept and transfer resource state representations to clients.
Web service
A web server programmed with specific, often reusable, logic.
Chapter 2. Identifier Design with URIs
URIs
REST APIs use Uniform Resource Identifiers (URIs) to address resources.
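As a reminder of the pieces a URI provides, RFC 3986's generic syntax decomposes an identifier into scheme, authority, path, query, and fragment. A quick sketch using a hypothetical API address:

    from urllib.parse import urlsplit

    # A made-up REST API URI, split into its RFC 3986 components.
    parts = urlsplit("https://api.example.com/players/1234?fields=name#stats")
    print(parts.scheme)    # https
    print(parts.netloc)    # api.example.com  (the authority)
    print(parts.path)      # /players/1234
    print(parts.query)     # fields=name
    print(parts.fragment)  # stats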

Darwin knew nothing about genes when he developed the theory of evolution through natural selection. Mendel knew nothing about DNA when, in an Austrian monastery garden, he developed his idea of inherited factors that are transmitted ‘true’ from generation to generation of peas. It doesn’t matter. They saw what nobody else had seen and suddenly we all had a new way of viewing the world.
The epigenetic landscape
Oddly enough, there was a conceptual framework that was in existence when John Gurdon performed his work. Go to any conference with the word ‘epigenetics’ in the title and at some point one of the speakers will refer to something called ‘Waddington’s epigenetic landscape’. They will show the grainy image seen in Figure 1.1.
Conrad Waddington was a hugely influential British polymath. He was born in 1903 in India but was sent back to England to go to school.

…

We no longer accept most of his ideas scientifically, but we should acknowledge that he was making a genuine attempt to address important questions. Inevitably, and quite rightly, Lamarck has been overshadowed by Charles Darwin, the true colossus of 19th century biology – actually, probably the colossus of biology generally. Darwin’s model of the evolution of species via natural selection has been the single most powerful conceptual framework in biological sciences. Its power became even greater once married to Mendel’s work on inheritance and our molecular understanding of DNA as the raw material of inheritance.
If we wanted to summarise a century and a half of evolutionary theory in one paragraph we might say:
Random variation in genes creates phenotypic variation in individuals. Some individuals will survive better than others in a particular environment, and these individuals are likely to have more offspring.

…

But females randomly inactivate an X chromosome in all their cells. Consequently, at a very fundamental level, all cells in a female body can be split into two camps depending on which X chromosome they inactivated. The expression for this is that females are epigenetic mosaics.
This sophisticated epigenetic control in females is a complicated and highly regulated process, and that’s where Mary Lyon’s predictions have provided such a useful conceptual framework. They can be paraphrased as the following four steps:
Counting: cells from the normal female would contain only one active X chromosome;
Initiation: X inactivation would occur early in development;
Choice: the inactive X could be either maternally or paternally derived, and the inactivation would be random in any one cell;
Maintenance: X inactivation would be irreversible in a somatic cell and all its descendants.

For example, performing artists only have a maximum of 365 nights a year on which they can do a show, and can’t become more “productive.” Nurses become arguably less, not more, productive in a meaningful sense if they treat more patients but the statistics work the opposite way. In the online economy, digital products can show infinite productivity—they can be duplicated essentially for free—but if they’re priced for free, they will perhaps not be produced in the desirable quantities. In these varied examples, the conceptual framework of measurement isn’t up to assessing the things we value (in a noneconomic sense), which in turn actually makes it hard to value them in the monetary sense.
This leads directly to a second requirement, which is clarity about the values and aims of economic policy and political choices. There is a fundamental set of trade-offs—a “trilemma,” or three-way dilemma—in the management of the economy—using resources as efficiently as possible, sharing them fairly between people, and allowing people as much freedom and self-determination as possible—and it is only possible to hit two of these three aims at any one time.

…

These increasing returns activities are growing in their extent, as seen in the growth of goods and research-intensive goods as a share of the leading economies’ annual output.
The more you think about the measurement problems, the more it becomes clear that the difficulty in measuring is a result not just of failing to collect the right statistics but is actually due to the way we think about such activities. The conceptual framework that lies behind existing economic statistics is a bad fit for an economy that is no longer mass-producing standardized manufactured goods. The structure of the economy is changing, and so is what people value. This is true both in the sense of what they’ll spend their money on in the weightless economy and in the sense of a growing appreciation of the legacy of today’s economy for tomorrow’s society.

…

So, bizarrely, many previously highly profitable businesses are looking quite a lot like public services in some respects. The zero marginal cost of conveying the song or movie to another user makes it harder to charge anything, but that in turn is undermining the provision of the service. Many businesses are scrabbling to find what it is they can charge for in order to cover their costs and sustain profit margins.
This all points toward the conclusion that our conceptual framework for understanding economic value hasn’t kept up with the way the economy has changed.
INNOVATION IN STATISTICS
Economists and statisticians certainly understand the problem. Questions of measurement have not only reached the public policy debate, they have been explored extensively within the profession.
One type of innovation has been the challenge to the monopoly of GDP over policy debates, and the development of either alternative or supplementary indicators.

The opponents voice concern about major changes in current system states by focusing on complexity and on normative values; in other words, their discussion is at Level II (if not Level III). Proponents and opponents both err in failing to recognize that both positions may be at once valid yet incommensurable. Any technology of more than trivial import exhibits behaviors at Levels I and II (and beyond, as we will discuss next), and these behaviors are unavoidable and symbiotic. But more profoundly, by relying on simplistic, anachronistic, and contradictory conceptual frameworks, both sides reinforce concepts and frameworks (Enlightenment certainties) that are not capable of engaging the radical technological transitions that we humans are continually creating. It is to these transitions that we now turn.
4. Level III Technology: Radical Contingency in Earth Systems
We have explored two levels of technology. At the shop-floor level, we can see much of the cause-and-effect chain necessary to meet specific and well-defined social goals: a vaccine prevents a particular disease, or a well-designed manufacturing process eliminates the use of toxic chemicals (and thus workers' potential exposure to them).

…

Similarly, the Counter Rocket, Artillery, and Mortar (C-RAM) system is a computerized machine-gun system currently deployed to defend against rockets and missiles to which humans could not react in time.9 Does the word "robot" signify a type of artifact, a type of capability, or a certain level of computational competence? Or does this discussion make the point that emerging technologies render dangerously contingent even words and concepts that we think we understand,
and thus leave us open to the risk of relying on implicit assumptions, discussions, and conceptual frameworks that are
already obsolete?
Lethal autonomous robots (LARs) are controversial at every
level, but discussion to date is often characterized by category
confusion.
To begin with, why should such a technology, in any form,
be deployed? The immediate response is Level I: "to save soldiers' lives." Indeed, robots, whether lethal and autonomous
or simply robotic, do save the lives of soldiers: in Iraq and Afghanistan, many explosive devices that might otherwise have
killed and maimed soldiers have been identified and eliminated
by robots.

…

It is
illegal for drug companies to advertise an "off-label" use, but it is legal
for a physician to prescribe a drug for "off-label" use. It is, however,
illegal to sell or trade in "off-label" drugs that have not been specifically prescribed for the individual.
3. This language was on the original website of the World Transhumanist Association, www.transhumanism.org. In 2008, that site was replaced by www.humanityplus.org.
4. The Great Chain of Being is a conceptual framework of the Universe perfected in the Christian medieval period in Europe. It envisions a structured hierarchy, with pure spirit and perfection (God) at the top, and pure matter and imperfection (rocks and other materials) at the bottom. In between, in order, come all things; angels are next to God, while plants are above matter, and beasts above plants. Humans are at the fulcrum point, for they are both spirit and matter.

1530
Paracelsus pioneers the application of chemistry to physiology and pathology
1543
Nicolaus Copernicus’ De revolutionibus orbium coelestium states the heliocentric theory of the solar system
Andreas Vesalius’ De humani corporis fabrica supplants Galen’s anatomical textbook
1546
Agricola’s De natura fossilium classifies minerals and introduces the term ‘fossil’
1572
Tycho Brahe records the first European observation of a supernova
1589
Galileo’s tests of falling bodies (published in De motu) revolutionize the experimental method
1600
William Gilbert’s De magnete, magnetisque corporibus describes the magnetic properties of the earth and electricity
1604
Galileo discovers that a free-falling body increases its distance as the square of the time
1608
Hans Lippershey and Zacharias Jansen independently invent the telescope
1609
Galileo conducts the first telescopic observations of the night sky
1610
Galileo discovers four of Jupiter’s moons and infers that the earth is not at the centre of the universe
1614
John Napier’s Mirifici logarithmorum canonis descriptio introduces logarithms
1628
William Harvey writes Exercitatio anatomica de motu cordis et sanguinis in animalibus, accurately describing the circulation of blood
1637
René Descartes’ ‘La Géométrie’, an appendix to his Discours de la méthode, founds analytic geometry
1638
Galileo’s Discorsi e dimonstrazioni matematiche founds modern mechanics
1640
Pierre de Fermat founds number theory
1654
Fermat and Blaise Pascal found probability theory
1661
Robert Boyle’s Skeptical Chymist defines elements and chemical analysis
1662
Boyle states Boyle’s Law that the volume occupied by a fixed mass of gas in a container is inversely proportional to the pressure it exerts
1669
Isaac Newton’s De analysi per aequationes numero terminorum infinitas presents the first systematic account of the calculus, independently developed by Gottfried Leibniz
1676
Antoni van Leeuwenhoek discovers micro-organisms
1687
Newton’s Philosophiae naturalis principia mathematica states the law of universal gravitation and the laws of motion
1735
Carolus Linnaeus’ Systema naturae introduces systematic classification of genera and species of organisms
1738
Daniel Bernoulli’s Hydrodynamica states Bernoulli’s Principle and founds the mathematical study of fluid flow and the kinetic theory of gases
1746
Jean-Etienne Guettard prepares the first true geological maps
1755
Joseph Black identifies carbon dioxide
1775
Antoine Lavoisier accurately describes combustion
1785
James Hutton’s ‘Concerning the System of the Earth’ states the uniformitarian view of the earth’s development
1789
Lavoisier’s Traité élémentaire de chimie states the law of conservation of matter
By the mid-1600s this kind of scientific knowledge was spreading as rapidly as had the doctrine of the Protestant Reformers a century before. The printing press and increasingly reliable postal services combined to create an extraordinary network, small by modern standards, but more powerful than anything previously achieved by a community of scholars. There was of course a great deal of intellectual resistance, as is always the case when the paradigm – the conceptual framework itself – shifts.28 Indeed, some of this resistance came from within. Newton himself dabbled in alchemy. Hooke all but killed himself with quack remedies for indigestion. It was by no means easy for such men to reconcile the new science with Christian doctrine, which few were ready to renounce.29 But it remains undeniable that this was an intellectual revolution even more transformative than the religious revolution that preceded and unintentionally begat it.

…

As the United Nations Climate Change Conference in Copenhagen in December 2009 made clear, rhetorical pleas to ‘save the planet’ for future generations are insufficient to overcome the conflicts over economic distribution between rich and poor countries that exist in the here and now. We love our grandchildren. But our great-great-grandchildren are harder to relate to.
Yet it is possible that this whole conceptual framework is, in fact, flawed. Perhaps Cole’s artistic representation of a civilizational supercycle of birth, growth and eventual death is a misrepresentation of the historical process. What if history is not cyclical and slow-moving but arrhythmic – sometimes almost stationary, but also capable of violent acceleration? What if historical time is less like the slow and predictable changing of the seasons and more like the elastic time of our dreams?

…

., The Pursuit of Power: Technology, Armed Force and Society since AD 1000 (Chicago, 1982)
———, The Rise of the West: A History of the Human Community (Chicago, 1991 [1963])
Maddison, Angus, The World Economy: A Millennial Perspective (Paris, 2001)
Melko, Matthew, The Nature of Civilizations (Boston, 1969)
Matthews, Derek, ‘The Strange Death of History Teaching (Fully Explained in Seven Easy-to-Follow Lessons)’, unpublished pamphlet (January 2009)
Morris, Ian, Why the West Rules – For Now: The Patterns of History, and What They Reveal About the Future (New York, 2010)
Mumford, Lewis, The City in History (New York, 1961)
Murray, Charles A., Human Accomplishment: The Pursuit of Excellence in the Arts and Sciences, 800 B.C. to 1950 (New York, 2003)
North, Douglass C., Understanding the Process of Economic Change (Princeton, 2005)
———, John Joseph Wallis and Barry R. Weingast, Violence and Social Orders: A Conceptual Framework for Interpreting Recorded Human History (Cambridge, 2009)
Osborne, Roger, Civilization: A New History of the Western World (New York, 2008)
Pomeranz, Kenneth, The Great Divergence: China, Europe and the Making of the Modern World Economy (Princeton, 2000)
Putterman, L. and David N. Weil, ‘Post-1500 Population Flows and the Long Run Determinants of Economic Growth and Inequality’, working paper (September 2008)
Quigley, Carroll, The Evolution of Civilizations (New York, 1961)
Rajan, Raghuram G. and Luigi Zingales, ‘The Persistence of Underdevelopment: Institutions, Human Capital, or Constituencies?’

Nevertheless, as soon as the war ended, they became the basis
for a series of massive military research projects, including the Intercontinental Ballistic Missile, the SAGE air defense system, and the Polaris Intermediate Range Missile. All of these projects depended heavily on computers, on interdisciplinary and interinstitutional collaborations, and on a
systems approach to engineering.44 Over the next twenty years, cybernetics
and systems theory more generally provided a rhetoric and a conceptual
framework with which to link the activities of each of these actors to the
others and to coordinate their work as a whole.
The power of cybernetics and systems theory to facilitate interdisciplinary collaboration emerged in large part thanks to the entrepreneurship of
Norbert Wiener and the research climate of World War II. Wiener did not
create the discipline of cybernetics out of thin air; rather, he pulled its analytical terms together by bridging multiple, if formerly segregated, scientific communities.

…

[Computer] intelligence should be directed toward instructing [the user], demystifying and exposing its own nature, and ultimately giving him active control.”25
The concept of building a peer-to-peer information system and the idea
that individuals needed to gain control over information and information
systems had been features of both the New Communalist movement and
the New Left for some time. Yet, the notion of doing these things with computers was relatively new, at least outside the walls of SRI and Xerox PARC.
For those who hoped to turn computing machines toward populist ends,
the religion of technology espoused by the Whole Earth Catalog offered an
important conceptual framework and source of legitimation. In the early
1970s, for example, Lee Felsenstein began to design the Tom Swift Terminal—a freestanding, easy-to-use terminal that would be as easy to repair
as a radio. Although it was never built precisely to Felsenstein’s first specifications, the Tom Swift Terminal design ultimately drove the creation
of an early personal computer known as the Sol. Felsenstein envisioned the
Tom Swift Terminal “as something that could be printed in the Whole Earth
Catalog.”

…

He explained in the first issue that the magazine took its name from the biological theory of “coevolution,” in which two species evolved symbiotically. Brand traced the origin of this idea to a 1965 study of the relationship between certain predatory caterpillars and the plants they ate, conducted by his old teacher Paul Ehrlich and Peter Raven.38 The first issue of CQ prominently featured an article by Ehrlich outlining his conceptual framework,
entitled “Coevolution and the Biology of Communities.” Yet, Brand considered coevolution to be more than a biological theory. It was a metaphor—
derived from and carrying the legitimacy of science—for a new way of life.
That metaphor depended not so much on Brand’s reading of contemporary
biology as it did on his reading of the mystical cybernetics of a former
anthropologist, psychiatrist, and biological researcher, Gregory Bateson.

For this reason, we’ll ground our discussion in concrete examples in the mold of the Polymath Project and Kasparov versus the World. In part 1 of this book we’ll use these concrete examples to distill a set of principles that explain how online tools can amplify collective intelligence.
I have deliberately focused the discussion in part 1 on a relatively small number of examples, with the idea being that as we develop a conceptual framework for understanding collective intelligence, we’ll revisit each of these examples several times, and come to understand them more deeply. Furthermore, the examples come not just from science, but also from areas such as chess and computer programming. The reason is that some of the most striking examples of amplifying collective intelligence—examples such as Kasparov versus the World—come from outside science, and we can learn a great deal by studying them.

…

In this chapter we’ll see that it’s this ability to restructure expert attention that is at the heart of how online tools amplify collective intelligence. What examples such as InnoCentive, the Polymath Project, and Kasparov versus the World share is the ability to bring the attention of the right expert to the right problem at the right time. In the first half of the chapter we’ll look in more detail at these examples, and develop a broad conceptual framework that explains how they restructure expert attention. In the second half of the chapter we’ll apply that framework to understand how online collaborations can work together in ways that are essentially different from offline collaborations.
Harnessing Latent Microexpertise
While the ASSET-InnoCentive story is striking, Kasparov versus the World is an even more impressive example of collective intelligence.

Even his colleagues had their doubts. A friend told him at one point, “You know, if people really get to know you, it’s one thing. But otherwise, you sound just like all the other charlatans.”
He had difficulties getting his ideas across to people throughout his career, but Engelbart persisted. By October 1962, he had sketched out his vision in a summary report for the air force entitled “Augmenting Human Intellect: A Conceptual Framework,” and the following year he condensed his ideas into a chapter in a collection titled Vistas in Information Handling. His “framework” was both a technological and organizational prescription for creating computer-equipped teams of people who could more efficiently work on a broad range of human problems. Augment was thus the personal computer and the Internet rolled into one.
In an effort to communicate the power of augmentation to his audiences, Engelbart occasionally relied on the concept of deaugmentation, an approach that was inspired by the same insight that underlay the original scaling ideas that he had come across in his days working around the NACA wind tunnels.

Berkeley: University of California Press, 2002.
Cowan, Ruth Schwartz. A Social History of American Technology. New York: Oxford University Press, 1997.
Coyote, Peter. Sleeping Where I Fall: A Chronicle. Washington, D.C.: Counterpoint, 1998.
Edwards, Paul N. The Closed World: Computers and the Politics of Discourse in Cold War America. Cambridge, Mass.: MIT Press, 1996.
Engelbart, Douglas C. Augmenting Human Intellect: A Conceptual Framework. Director of Information Sciences, Air Force Office of Scientific Research, 1962.
Evans, Christopher Riche. The Micro Millennium. New York: Viking Press, 1980.
Farber, David R. The Sixties: From Memory to History. Chapel Hill: University of North Carolina Press, 1994.
———. The Age of Great Dreams: America in the 1960s. New York: Hill and Wang, 1994.
Flamm, Kenneth. Creating the Computer: Government, Industry, and High Technology.

But what would you see if you could follow a single reactive molecule, slowed down enough to enable the eye to follow its motions? You might not want to try, because to have good odds of seeing a reaction occur, you’d have to watch for about a billion minutes, which is to say, about two thousand years.
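As a quick check of that figure: a year contains about 525,600 minutes, so a billion minutes is indeed on the order of two thousand years.

    # Sanity check: how many years is a billion minutes?
    minutes_per_year = 60 * 24 * 365
    print(1_000_000_000 / minutes_per_year)  # about 1903 years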
Thus, the scaled view that gives such powerful insights into the behavior of rigid nanomachines is essentially useless for understanding solution-phase chemistry. Chemists must work in a different conceptual framework, one seldom concerned with mechanical speed, force, or position.*
Thermal energy also dominates motions in biomolecular nanomachines, devices which straddle the gap between the chaos of solution-phase chemistry and the orderly motion of gears, bearings, shafts, and the rest. Many biomolecular components are made of protein, which makes proteins worth a closer look.
To imagine a protein at work in water, first make the water molecules transparent to clear away the surrounding solution-phase blur.

…

Now we are ready to ask what these capabilities imply and what they will enable. We need to start at the bottom, exploring the performance of extraordinary materials in extraordinary forms, then move up to higher levels: components, products, applications, and costs (in the broadest, yet physical sense of the word). The results of this exploration will offer a concrete view of the potential products of radical abundance, providing a conceptual framework for considering its implications. The resulting picture combines radically low-cost production—in terms of labor, capital, materials, energy, and environmental impact—with products that themselves can be radically better in performance, efficiency, and cost of use.
ASKING THREE FUNDAMENTAL QUESTIONS
“What can be made?”
“What can it do?”
“How much will it cost to produce?”
These are the fundamental questions that any vision of future technologies must answer.

…

We’ve seen the emergence of a gift economy in digital products such as software, text, images, and video; the natural course of events would see this pattern extend to APM product-design files, leading (aside from the cost of input materials) to a gift economy in physical objects (but within what mandated constraints?). Considering both similarities and contrasts between the two revolutions can help to build a more robust conceptual framework.
Regarding specific problems (a plunge in demand for steel, for example), it’s natural to worry that they might be neglected, simply out of distraction and inertia. In a world in which coherent scenarios have traction, however, specific problems will often be framed as instances of a broader, generic, high-profile problem—steel, after all, won’t be a special case of falling demand.
Likewise, concerns about particular hazards of unconstrained applications of APM—this material, that device—won’t arise in isolation.

Similarly, in an anguished review of “extremism” and its ascendance, New York Times Israel correspondent Thomas Friedman includes under this rubric those who advocate a non-racist settlement in accord with the international consensus, while the Western leaders of the rejectionist camp, who also hold a commanding lead in terrorist operations, are the “moderates”; by definition, one might add. Friedman writes that “Extremists have always been much better at exploiting the media.” He is quite right; Israel and the U.S. have shown unparalleled mastery of this art, as his own articles and news reports indicate.9 His convenient version of history and the conceptual framework of his reporting, as just illustrated, provide a few of the many examples of the success of extremists in “exploiting the media”—now using the term in its literal sense.
In adopting a conceptual framework designed to exclude comprehension of the facts and issues, the Times follows the practice of Israeli models such as Rabin, who achieve the status of “moderates” by virtue of their general conformity to U.S. government demands. It is, correspondingly, entirely natural that when Friedman reviews “Two Decades of Seeking Peace in the Mideast,” major proposals rejected by the U.S. and Israel are omitted as inappropriate for the historical record.

…

One stable feature is the Churchillian doctrine: the rich and powerful have every right to demand that they be left in peace to enjoy what they have gained, often by violence and terror; the rest can be ignored as long as they suffer in silence, but if they interfere with the lives of those who rule the world by right, the “terrors of the earth” will be visited upon them with righteous wrath, unless power is constrained from within.
The first five chapters below are concerned with the first phase of the “war on terror,” during the Reagan–Bush (No. 1) Administrations. The preface and the first three chapters constitute the original publication: Pirates and Emperors (Claremont, 1986). Chapter 1 is devoted to the conceptual framework in which these and related issues are presented within the reigning doctrinal system. Chapter 2 provides a sample—only a sample—of Middle East terrorism in the real world, along with some discussion of the style of apologetics employed to ensure that it proceeds unhampered. Chapter 3 turns to the role played by Libya in the doctrinal system during those years. Chapter 4 appears in the 1987 edition of Pirates and Emperors (Black Rose, Montreal); it is a transcript of a keynote address at the Arab Association of University Graduates Convention, November 15, 1986.

My hope for more complex and honest debate may sound too wishful, but I was struck in our lengthy interview by Rubin’s willingness to discuss contrary propositions, and by his disarmingly self-effacing and reflective manner (the transcript is posted at www.thenation.com). Several times, I was taken aback when his comments made tentative concessions to the opposition’s argument. He even endorsed, though only in broad principle, some objectives for reforming global trade that his critics have long advocated.
I suggest that reformers test his sincerity. In the same spirit, they might try to initiate a conversation about what Rubin calls the “conceptual framework” for reform. He says he would welcome the discussion.
The Hamilton Project’s early policy output, I concede, doesn’t encourage a belief that reasoned dialogue with dissenters is what Rubin has in mind. Advisory board members see themselves as progressive-minded, but they do not stray from the mainstream’s conventional wisdom—lots of Harvard, Princeton and Berkeley, no one from the ranks of “free trade”
skeptics.

…

It’s an imposition of one on the other.” This is a startling statement: The man from Citigroup has articulated the essential reasoning that makes the case for including labor rights in the global trading system.
That conversation has convinced me that outgunned reformers ought to make use of Rubin’s musings. Knock on his door and try to initiate a dialogue. If the critics come forward and offer their ideas on a “conceptual framework” for reform, I ask, would the Hamilton Project be willing to discuss them?
Rubin reiterates his doubts and reservations. “But the answer is yes,” he says. “The answer is absolutely yes.” Skeptical friends and kindred spirits will probably say to me, You have been conned. I would say back to them, What have you got to lose by talking to the man?
The Hamilton Project is a sophisticated example of what I call “deep lobbying”—developing well in advance of the 2008
presidential election an agenda that safely avoids critical challenges to the global system and defines the terms of debate in very limiting ways.

Barnett, Mi Feng, and Xiaoqu Luo, “Social Identity, Market Memory, and First-Mover Advantage,” Industrial and Corporate Change 22, no. 3 (2012): 585.
116 Fast followers, on the other hand: Reasons for the second-wave advantage were proposed by researchers from Texas A&M only a few years after Lieberman and Montgomery’s initial paper on first movers. “Major shifts in technology for which the first mover is ill-prepared because of its investment in old technology may favor the fast follower that is not burdened with such investments,” write Roger A. Kerin, P. Rajan Varadarajan, and Robert A. Peterson, “First-Mover Advantage: A Synthesis, Conceptual Framework, and Research Propositions,” Journal of Marketing 56, no. 4 (1992): 33–52. “Later entrants’ access to relatively newer cost-efficient technologies enables them to offset or neutralize the first mover’s experience-based cost advantages.”
116 Many of the biggest corporate successes: Steve Blank, writing in Business Insider, makes one of the best-formed arguments on second-wave advantage out there: “You’re Better Off Being a Fast Follower than an Originator,” Business Insider, October 5, 2010, http://www.businessinsider.com/youre-better-off-being-a-fast-follower-than-an-originator-2010-10 (accessed February 16, 2014).
117 The way to predict the best waves: Fernando F.

And now, at present, there is a resurgence of
cooperative currencies and other innovations as the shadow of recession looms, but the dire consequences and tough lessons from these
experiences seem to have lapsed from memory.
Back in the day when Michael Unterguggenberger was elected mayor of Wörgl, in 1931, some 30 percent of the workforce was unemployed, leaving 200 families absolutely penniless. The mayor-with-the-long-name, as the renowned U.S. economist Irving Fisher from Yale would call him, was familiar with Silvio Gesell’s work. A German economist and merchant, Gesell developed the conceptual framework for demurrage and other theories that made him, some argue, the grandfather of modern-day cooperative currencies. His monetary designs are often referred to as Freigeld, or “free money” in English.
The mayor decided to put Gesell’s ideas to the test, as there was
much to be done around the town and many willing and able-bodied
folks looking for work. The rub was, however, that there were only
40,000 Austrian shillings remaining in the bank, just enough to pay
the salaries of a couple of dozen people for one month, far short of
what was needed.
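Demurrage, Gesell's central idea, is a deliberate carrying cost on money: a note must be stamped at regular intervals, at a small percentage of face value, or it ceases to be valid, which discourages hoarding. The Wörgl scrip is commonly described as charging 1 percent per month; a rough sketch of the arithmetic, treating the stamps as simple compounding:

    # Demurrage: holding the note costs `rate` of face value each month
    # (Wörgl's commonly cited figure is 1%); modeled here as compounding.
    def value_after(face_value, rate=0.01, months=12):
        return face_value * (1 - rate) ** months

    # Idle money melts: about 11.4% of face value is forfeited over a year,
    # so holders spend or invest the notes instead of hoarding them.
    print(round(value_after(100), 2))  # 88.64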

His instant answer was statistics, because the ability to understand data would be the most powerful skill in the twenty-first century.
The rise of the alpha geeks means the 1 percent is more fiercely educated and the returns on elite education are higher than ever before. One way to understand why we are living in a golden age of the nerds is with a metaphor invented by Jan Tinbergen, joint winner of the first Nobel Prize in economics: the race between education and technology. That idea is the title of and conceptual framework for a recent book by Larry Katz and Claudia Goldin, the Harvard pair who study how the interplay between new technologies and education shapes income distribution.
In the nineteenth century, as the first gilded age was reaching its peak, technology raced ahead of education. As a result, if you were what counted as highly educated in that age—which was finishing high school (remember, bestselling author Henry George left school at fourteen)—you could command a premium compared to unskilled workers.

…

The pivot is about recognizing when you are on the wrong track and changing course—and that, too, is central to Soros’s ability to respond to revolution.
Chanos, who leased office space from Soros’s Quantum Fund in midtown New York between 1988 and 1991, agrees. “One thing that I’ve both wrestled with and admired that Soros conquered many years ago is the ability to go from long to short, the ability to turn on a dime when confronted with the evidence. Emotionally that is really hard.”
“My conceptual framework, which basically emphasizes the importance of misconceptions, makes me extremely critical of my own decisions,” Soros told me. “I reexamine them all the time and recognize when I am on the wrong track. . . . I know that I’m bound to be wrong and therefore am more likely to correct my mistakes.”
“It’s an almost aggressive pessimism about his own ideas, that he is going to be the first person to find out what’s wrong with his theory, rather than what’s right with his theory,” his son Jonathan told me.

…

the ability to “pivot” Caroline O’Connor and Perry Klebahn, “The Strategic Pivot: Rules for Entrepreneurs and Other Innovators,” Harvard Business Review blog network, February 28, 2011.
Flickr’s genesis was in 2002 See Jessica Livingston, Founders at Work: Stories of Startups’ Early Days (Apress, 2007), pp. 257–264.
“One thing that I’ve both wrestled with” Chrystia Freeland, “The Credit Crunch According to Soros,” Financial Times, January 30, 2009.
“My conceptual framework, which basically emphasizes” CF interview with George Soros, December 16, 2008.
“It’s an almost aggressive pessimism about his own ideas” CF interview with Jonathan Soros, July 14, 2009.
“The businesses and institutions underpinning” Jennings, “Opportunities of a Lifetime.”
“The group of winners is churning at an increasing and rapid rate” “Measuring the Forces of Long-Term Change: The 2009 shift index”, Deloitte Center for the Edge, December 2009, p. 115.

Furthermore, precise probabilistic inference would be implemented
as a special collection of rules running on top of the underlying NARS inference engine
in roughly the same manner that programs may run on top of an operating system.
Following up on the uncertain-logic theme, the next chapter by Stephan Vladimir
Bugaj and Ben Goertzel moves this theme into the domain of developmental
psychology. Piaget’s ideas have been questioned by modern experimental
developmental psychology, yet remain the most coherent existing conceptual
framework for studying human cognitive development. It turns out to be possible to
create a Piaget-like theory of stages of cognitive development that is specifically
appropriate to uncertain reasoning systems like PLN, in which successive stages
involve progressively sophisticated inference control: simple heuristic control (the
infantile stage); inductive, history-based control (the concrete operational stage);
inference-based inference control (the formal stage); and inference-based modification

…

Four Contemporary AGI Designs: a Comparative Treatment
September 26, 2006
Stan FRANKLIN, Ben GOERTZEL, Alexei SAMSONOVICH, Pei WANG
Introduction (by Ben Goertzel)
During his talk at the AGI Workshop, Stan Franklin suggested that his LIDA
architecture might fruitfully be considered not only as a specific AGI design, but also
as a general framework within which to discuss and compare various AGI designs and
approaches. With this in mind, following the workshop itself, I (Goertzel) formulated a
list of simple questions intended to be pertinent to any AGI software design, mostly
based on the conceptual framework presented in Stan Franklin’s workshop presentation
(and represented in this volume by his article “A Foundational Architecture for
Artificial General Intelligence”) with a couple additions and variations. All individuals
who presented talks on AGI architectures at the workshop were invited to respond to
the questionnaire, giving answers appropriate to their own AGI design. Four
individuals (myself, Wang, Franklin and Samsonovich) took up this offer, and their
answers are reported here – without modification; exactly as they gave them.

…

And although we do have some external access to the functioning of the brain,
through brain mapping techniques, signal interventions and post-mortem examination,
our ability to get fine detail, and to link that detail to the cognitive level, is a subject of
fierce debate within the cognitive science community [12]; [13].
4.4.
Frameworks and Quasi-Complete Systems
What does it mean to engage in a program of “systematic exploration” of the space of
cognitive systems? To be systematic, such a program needs to be unified by a common
conceptual framework that is explicit enough that it allows the relationships between
systems to be clearly seen.
The idea of a “framework” is that it defines a set of choices for the broad
architectural features of the class of cognitive systems that it expresses. For every one
of the various mechanisms that we might anticipate being involved in a complete
cognitive system, the framework should have something to say about how that
mechanism is instantiated, and how it relates to the rest of the system.

In this, it helps to focus our attention on:
the dizzying rates of capitalist growth made possible by oil;
the ensuing political transformations that reinforce the oil-dependent economy and block transitions to a safer and fairer way of organizing economic life; and
the perpetual state of crisis that is induced by the imperative of expansion, one that simultaneously undermines the supply of non-renewable fuel and—more importantly—the earth’s life-support systems.
Ultimately, this conceptual framework helps us understand not only how Alberta is intimately connected to global oil crises, and the United States energy market in particular, but also how Canada and especially Alberta are riven with political, economic, and environmental problems in attempts to expand tar sands extraction.
Petro-Capitalism
At the centre of the analysis of capitalism’s relation to nature is its inherent and unavoidable dependence on fossil fuels, and particularly on oil.

…

Since capitalism needs infinite economic growth and ever-expanding consumption, its logic essentially compels increasing CO2 emissions that threaten the global ecological system—and indeed life itself.8 The growth imperative drives the continual global search for new (though ultimately finite) oil supplies in ways that often have high energy and resource demands (as in the tar sands, and in hydraulic fracturing for shale oil and gas) and carry a large ecological burden.
In sum, the conceptual framework of petro-capitalism centres oil as the lifeblood of global capitalism, with the power to fundamentally reshape political institutions from global to national to provincial levels. Yet it is also a system in permanent crisis due to its intractable role in climate change and environmental degradation, alongside inevitable challenges to oil supplies.
Global Petro-Capitalist Crises
The contradictory nature of petro-capitalism, which depends upon incessant expansion at ever-higher economic and environmental costs, is coming into sharper focus with the global access to a steady supply of conventional crude oil now in question, and threatening to create a global energy crisis.

…

James, when Jim Munroe was presenting on the traditional system of keyoh (territory or trapline) holdings, a series of Dakelh terms were rendered illegible in the published transcript: “There are laws around that. There’s terms in our language it’s called (native word) and (native word) and it means they did—people disappear if they don’t respect the land and they don’t ask.”21 This silencing of Indigenous terms from the official record reflects an underlying disregard for Indigenous conceptual frameworks, despite the putative inclusions of Aboriginal traditional knowledge in the hearings. These exclusions are rationalized on the basis that the hearings process is recorded in “either of the official languages [French and English], depending on the languages spoken by the participant at the public hearing.”22 The fact that Indigenous languages are unrecognized as “official” languages underlines how, despite the ostensible inclusion of Indigeneity, the underlying framework of regulation remains an imposed and colonial one.

Finance stimulated the development of quantitative models of the future and the maintenance of deep records about the past. Markets taught people about such things as the limitations of the capacity for reason and the dangers of miscalculation. These complex conceptual frameworks augmented and stimulated the development of problem solving, but they also set up a conflict between traditional and quantitative modes of thought. This conflict is heightened during periods of financial innovation and financial disaster. Not only did financial architecture challenge traditional institutions, it also challenged traditional conceptual frameworks for dealing with the unknown. Cultural notions of chance and fortune are embedded in a rich set of symbols, myths, and moral valences. Understanding and managing this conflict remain important challenges to modern society.

…

Financial technology made possible not just financial contracts but also financial thinking—conceptual ways of framing economic interactions that use the financial perspective of time. Borrowing, lending, and financial planning shaped a particular conceptualization of time, quantifying it in new ways and simplifying it for purposes of calculation. This way of thinking and specialized knowledge, in turn, affected and extended the capabilities of government and enterprise. This conceptual framework is what I refer to as the software of finance in the Introduction.
Finance relies on the ability to quantify and calculate and reason mathematically. Thus, much of this chapter focuses on the development of mathematical tools in ancient times. Another basic ingredient of finance is the dimension of time. Finance requires the measurement and expression of time and this chapter explores time technology in some depth.

…

The Sumerians created tools for explicitly quantifying intertemporal contracts, eliminating ambiguity or disagreement between the parties through the invention of notation for economic units and a flexible number system. Writing and numbers brought clarity and precision to the economic arrangements demanded by the Near Eastern economic system.
There is also evidence that financial contracting developed alongside and stimulated conceptual development. Increasing urban density in an economy managed by a common authority required a record system—and a conceptual framework—capable of expressing big numbers. Evidence from early cuneiform appears to document this leap in written expression and perhaps an accompanying shift in arithmetic thought. Likewise, scholars have documented an administrative quantification of time that abstracted from natural, astronomical time. Both of these laid a foundation for the development of further abstractions. The expression of immense quantities was limited only by the human imagination, as was the division of time into infinitesimal slices.

Another was Trivers’ reciprocal altruism,7 which would work only if a replicator could rely on the returned favour coming back precisely to itself, at least most of the time. The exposition of these and other ideas in The Selfish Gene is not only a tour de force of plain speaking but, by relating them all to the same central argument, itself the best representation of Darwinism available, it established a single conceptual framework within which old and new ideas in adaptationism could be understood.
This overarching coherent structure in The Selfish Gene provides the kind of logical foundation and conceptual unity across a broad spectrum of ideas that is usually associated with mathematics. The irony will become clear. First, I want to note that the successes of the book, popular and academic, stem not only from the clarity of exposition and beautiful use of language that are universally acknowledged in Dawkins’ work, but also from the less appreciated, but in intellectual terms much more significant, fundamental contribution to science represented by this foundational structure.

…

Dawkins, on the other hand, has no hesitation about Hamilton being the core of sociobiology. For him sociobiology is in fact ‘the branch of ethology inspired by Bill Hamilton’.27 Accordingly, The Selfish Gene expounds on such things as The Prisoner’s Dilemma as the prototype model for game theoretical reasoning. It teaches the reader to start thinking in terms of strategies, and it provides a common conceptual framework for the core theorists Hamilton, Williams, John Maynard Smith, and Trivers. The book is full of imaginative examples, some of them involving genetic actors as vivid as if they were humans, all in the service of explaining the logic or mechanism of evolution, and all from a gene’s eye perspective.
It fell to Dawkins to be the one to clear up misunderstandings about sociobiology. Wilson had moved on rather soon after Sociobiology to gene-culture coevolutionary models and later to preserving biodiversity.

Develop and re-use tools for data quality management across a wider variety of scenarios and applications.
This chapter outlines a conceptual framework and approach for data quality analysis that will hopefully serve as a guide for how you think about your data, given the nature of your objective. The ideas presented here were born of (often painful) experience and are likely not new to anyone who has spent extended time looking at data; but we hope they will also be useful for those newer to the data analysis space, and for anyone who is looking to create or reinforce good data habits.
Framework Introduction: The Four Cs of Data Quality Analysis
Just as there are many angles from which to view your data when searching for an answer, there are many viewing angles for assessing quality. Below we outline a conceptual framework that consists of four facets. We affectionately refer to them as The Four Cs of Data Quality Analysis[77]:
Complete: Is everything here that’s supposed to be here?
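To make the first facet concrete, here is a minimal sketch of a completeness check, assuming tabular data in a pandas DataFrame; the column names and data are hypothetical rather than drawn from the text.

    import pandas as pd

    # Minimal "Complete" check: are the expected columns present at all,
    # and what fraction of each is missing? Column names are hypothetical.
    def completeness_report(df, required_columns):
        missing = [c for c in required_columns if c not in df.columns]
        if missing:
            raise ValueError(f"missing expected columns: {missing}")
        return pd.DataFrame({
            "null_count": df[required_columns].isna().sum(),
            "null_rate": df[required_columns].isna().mean(),
        }).sort_values("null_rate", ascending=False)

    # Illustrative usage
    events = pd.DataFrame({"user_id": [1, 2, None],
                           "event_ts": ["2021-01-01", None, None]})
    print(completeness_report(events, ["user_id", "event_ts"]))

Checks like this are deliberately dumb; the point is to make "is everything here?" a question the pipeline asks automatically rather than one the analyst remembers to ask.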

For example, CATENET would remain a system of independently administered networks, each run by its own people with its own rules. But when the time came for one network to exchange data with, say, the ARPANET, the internetworking protocols would operate. The gateway computers handling the transmission would not need to care about the local complexity buried inside each network. Their only task would be to get packets through the network to the destination host on the other side, making a so-called end-to-end link.
Once the conceptual framework was established, Cerf and Kahn spent the spring and summer of 1973 working out the details. Cerf presented the problem to his Stanford graduate students, and he and Kahn joined them in attacking it. They held a seminar that concentrated on the details of developing the host-to-host protocol into a standard allowing data traffic to flow across networks. The Stanford seminar helped frame key issues, and laid the foundation for solutions that would emerge several years later.

…

The first Sun machines were shipped with the Berkeley version of UNIX, complete with TCP/IP. Berkeley UNIX with TCP/IP would be crucial to the growth of the Internet. When Sun included network software as part of every machine it sold and didn’t charge separately for it, networking exploded.
It further mushroomed because of Ethernet.
While packet radio and SATNET sparked the thinking about a conceptual framework for internetworking, they were largely experimental. Ethernet—the local area network designed by Bob Metcalfe and his colleagues at Xerox PARC back in 1973—was a practical solution to the problem of how to tie computers together, either on a campus or at a company. Xerox began selling Ethernet as a commercial product in 1980. At around the same time, Bob Taylor’s division at Xerox PARC gave a grant to major research universities in the form of Ethernet equipment, powerful computers, and laser printers.

Whilst there is a wealth of evidence on the sort of organisation that supports evidence-based practice, there is much less evidence on the effectiveness of specific interventions to change an organisation to make it more ‘evidence based’—and it is beyond the scope of this book to address this topic comprehensively. Much of the literature on organisational change is in the form of practical checklists or the ‘ten tips for success’ type format. Checklists and tips can be enormously useful, but such lists tend to leave me hungry for some coherent conceptual models on which to hang my own real-life experiences.
The management literature offers not one but several dozen different conceptual frameworks for looking at change—leaving the non-expert confused about where to start. It was my attempt to make sense of this multiplicity of theories that led me to write a series of six articles published a few years ago in the British Journal of General Practice entitled ‘Theories of change’. In these articles, I explored six different models of professional and organisational change in relation to effective clinical practice [39–44]:
1.

Over recent years this relationship has increasingly exhibited what might be described as ‘parasitic’ features, with the private business sector lobbying governments to weaken regulations and cut capital gains taxes, but at the same time reducing its share of investment in basic research and thus relying even more on public spending in this area.21 As we shall show below, future growth will require a very different form of collaboration between public and private sectors, characterised by a healthy symbiosis that is sustainable over the long-term.
Orthodox economic theory and the ‘market failure’ approach
To meet these challenges and achieve the goal of smart, innovation-led growth, we need to develop a new conceptual framework. For this we need to look beyond the narrow assumptions of mainstream economics, which has paid too little attention to the disequilibrating process of innovation. These models continue to assume that innovation is (a) driven mainly by the individual genius of single entrepreneurs, at best ‘facilitated’ by the public sector; (b) characterised only by predictable risk (which can be precisely quantified ex ante by means of well-defined probability distributions, as in lotteries) rather than true uncertainty; and (c) equally likely to occur at any moment in time.

…

And a much more sophisticated approach needs to be taken to integrating endogenous technological and institutional change within them. The modelling community could fruitfully focus attention on the economic processes which generate knowledge and drive innovation and systemic change. This could prove highly valuable in designing effective policy.
Climate change policy
A range of policy instruments are required to cut greenhouse gas emissions. Michael Grubb has provided a helpful conceptual framework. He notes how regulatory measures (such as energy efficiency standards) are appropriate where economic activity is characterised by satisficing behaviour; market-based incentives (such as carbon pricing) in the domain of optimising behaviour; and innovation policy (such as deployment subsidies and R&D expenditure) where technological and structural transformation is required.35
Of these, innovation policy is the most complex, and the one where environmental economics has so far had least to say.

It’s the equivalent of a bicycle for our minds.”3
Jobs was almost assuredly inspired in this by the late Doug Engelbart (who in turn was inspired by MIT computer visionary Vannevar Bush). Engelbart, the inventor of the point-and-click computer user interface and the mouse to use with it, was perhaps the first to embrace the term augmentation, which in his view involved getting machines to perform the mechanical aspects of thinking and idea sharing. In 1962 he published a widely circulated paper: “Augmenting Human Intellect: A Conceptual Framework.”4 He even founded an Augmentation Research Center, which in 1969, by the way, constituted one end of the first Internet link ever made. (The University of California, Los Angeles, was the other end.) Jobs borrowed not only Engelbart’s interface ideas, but also his desire to create “wheels for the mind.”
Going back further, Norbert Wiener, the MIT colleague of Vannevar Bush whom we mentioned earlier as the author of The Human Use of Human Beings, was expressing his hope already in 1950 that machines would free people from the drudgery of repetitive industrial work so that they could focus on more creative pursuits.

In contrast, the modern machinery of accounting, with its numerous noncash revenues and expenses and the marking of assets and liabilities to market (fair values)—which constitutes most of the extensive, worldwide accounting rules and regulations—was intended to improve upon the “primitive” concept of cash flows. This was made clear by the Financial Accounting Standards Board (FASB), the exclusive accounting rule-making body in the United States, in its original conceptual framework:

Information about enterprise earnings based on accrual [noncash] accounting generally provides a better indication of an enterprise’s present and continuing ability to generate favorable cash flows than information limited to the financial effects of cash receipts and payments.7

Obviously, as our research shows, reported earnings, the end product of accounting measurement and valuation procedures, do not outperform cash flows, at least when their predicted values are used to generate investment returns.

…

The yearly adjusted R²s shown in Figure 3.4 are obtained from the annual regression of sample firms’ market value on their sales, cost of goods sold, selling, general, and administrative (SG&A) expenses, net earnings, total assets, and total liabilities. The samples for all regressions include all US-listed companies with the required data, as retrieved from the intersection of the Compustat and CRSP databases for 1950 to 2013.
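In outline, the computation behind Figure 3.4 could be reproduced along the following lines; this is a hedged sketch in which the column names are hypothetical stand-ins for the Compustat/CRSP fields, not the authors’ actual code.

    import pandas as pd
    import statsmodels.api as sm

    PREDICTORS = ["sales", "cogs", "sgna", "net_earnings",
                  "total_assets", "total_liabilities"]

    # Year-by-year cross-sectional regression of market value on the six
    # accounting variables, collecting each year's adjusted R-squared.
    def yearly_adjusted_r2(panel):
        out = {}
        for year, firms in panel.groupby("year"):
            X = sm.add_constant(firms[PREDICTORS])
            fit = sm.OLS(firms["market_value"], X, missing="drop").fit()
            out[year] = fit.rsquared_adj
        return pd.Series(out).sort_index()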
NOTES

1. FASB, 2010, Statement of Financial Accounting Concepts No. 8, Conceptual Framework for Financial Reporting, Chapter 1, Introduction.
2. See Yoree Koh, “Twitter Ad Woes Subside but Growth Stalls,” The Wall Street Journal (July 29, 2015), B1.
3. Some readers, accustomed to the proliferation of surveys and polls in many walks of life, may wonder why we don’t survey investors about the relevance of financial information. First, our main objective is to examine relevance patterns over half a century, which requires consistent surveys from the 1960s, 1970s, and on.

Of course, collapse in this context doesn’t mean that everybody died, but that their ways of life radically shifted and often much of the population migrated to other regions. In other words, history provides us with no models of sustainable development other than democratic capitalism.
Every one of these earlier ultimately unsustainable societies was what economics Nobelist Douglass North and his colleagues call, in Violence and Social Orders: A Conceptual Framework for Interpreting Recorded Human History, “natural states.” Natural states are basically organized as hierarchical patron-client networks in which small, militarily potent elites extract resources from a subject population. The basic deal is a Hobbesian contract in which elites promise their subjects an end to the “war of all against all” in exchange for wealth and power.
Natural states operate by limiting access to valuable resources—that is to say, by creating and sharing the rewards of monopolies.

The term ‘information theory’ is an appealing but unfortunate label, which continues to cause endless misunderstandings. Shannon came to regret its widespread popularity, and I shall avoid it in this context.
MTC is the theory that lies behind any phenomenon involving data encoding and transmission. As such, it has had a profound impact on the analyses of the various kinds of information, to which it has provided both the technical vocabulary and at least the initial conceptual framework. It would be impossible to understand the nature of information without grasping at least its main gist. This is the task of the present chapter.
The mathematical theory of communication
MTC treats information as data communication, with the primary aim of devising efficient ways of encoding and transferring data. It has its origin in the field of electrical engineering, as the study of communication limits, and develops a quantitative approach to information.
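For orientation (the formula is standard MTC, though not spelled out in this excerpt): the central quantity the theory attaches to a source is its entropy. A source emitting symbols x with probabilities p(x) carries on average

H = −Σ p(x) log₂ p(x)

bits of information per symbol, so a fair coin toss conveys exactly one bit, while a heavily biased coin conveys less.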

Here again, the French experience is quite relevant to today’s world, where many commentators continue to believe, as Leroy-Beaulieu did a little more than a century ago, that ever more fully guaranteed property rights, ever freer markets, and ever “purer and more perfect” competition are enough to ensure a just, prosperous, and harmonious society. Unfortunately, the task is more complex.
The Theoretical and Conceptual Framework
Before proceeding, it may be useful to say a little more about the theoretical and conceptual framework of this research as well as the intellectual itinerary that led me to write this book.
I belong to a generation that turned eighteen in 1989, which was not only the bicentennial of the French Revolution but also the year when the Berlin Wall fell. I belong to a generation that came of age listening to news of the collapse of the Communist dictatorships and never felt the slightest affection or nostalgia for those regimes or for the Soviet Union.

…

Malthus, Young, and the French Revolution
Ricardo: The Principle of Scarcity
Marx: The Principle of Infinite Accumulation
From Marx to Kuznets, or Apocalypse to Fairy Tale
The Kuznets Curve: Good News in the Midst of the Cold War
Putting the Distributional Question Back at the Heart of Economic Analysis
The Sources Used in This Book
The Major Results of This Study
Forces of Convergence, Forces of Divergence
The Fundamental Force for Divergence: r > g
The Geographical and Historical Boundaries of This Study
The Theoretical and Conceptual Framework
Outline of the Book
Part One: Income and Capital
1. Income and Output
The Capital-Labor Split in the Long Run: Not So Stable
The Idea of National Income
What Is Capital?
Capital and Wealth
The Capital/Income Ratio
The First Fundamental Law of Capitalism: α = r × β
National Accounts: An Evolving Social Construct
The Global Distribution of Production
From Continental Blocs to Regional Blocs
Global Inequality: From 150 Euros per Month to 3,000 Euros per Month
The Global Distribution of Income Is More Unequal Than the Distribution of Output
What Forces Favor Convergence?

We will elaborate the three primary aspects of immaterial labor in the contemporary economy: the communicative labor of industrial production that has newly become linked in informational networks, the interactive labor of symbolic analysis and problem solving, and the labor of the production and manipulation of affects (see Section 3.4). This third aspect, with its focus on the productivity of the corporeal, the somatic, is an extremely important element in the contemporary networks of biopolitical production. The work of this school and its analysis of general intellect, then, certainly marks a step forward, but its conceptual framework remains too pure, almost angelic. In the final analysis, these new conceptions too only scratch the surface of the productive dynamic of the new theoretical framework of biopower.17

Our task, then, is to build on these partially successful attempts to recognize the potential of biopolitical production. Precisely by bringing together coherently the different defining characteristics of the biopolitical context that we have described up to this point, and leading them back to the ontology of production, we will be able to identify the new figure of the collective biopolitical body, which may nonetheless remain as contradictory as it is paradoxical.

…

In this way intervention is an effective mechanism that through police deployments contributes directly to the construction of the moral, normative, and institutional order of Empire.

Royal Prerogatives

What were traditionally called the royal prerogatives of sovereignty seem in effect to be repeated and even substantially renewed in the construction of Empire. If we were to remain within the conceptual framework of classic domestic and international law, we might be tempted to say that a supranational quasi-state is being formed. That does not seem to us, however, an accurate characterization of the situation. When the royal prerogatives of modern sovereignty reappear in Empire, they take on a completely different form. For example, the sovereign function of deploying military forces was carried out by the modern nation-states and is now conducted by Empire, but, as we have seen, the justification for such deployments now rests on a state of permanent exception, and the deployments themselves take the form of police actions.

…

The institutions that constitute civil society functioned as passageways that channel flows of social and economic forces, raising them up toward a coherent unity and, flowing back, like an irrigation network, distribute the command of the unity throughout the immanent social field. These non-state institutions, in other words, organized capitalist society under the order of the state and in turn spread state rule throughout society. In the terms of our conceptual framework, we might say that civil society was the terrain of the becoming-immanent of modern state sovereignty (down to capitalist society) and at the same time inversely the becoming-transcendent of capitalist society (up to the state).

In our times, however, civil society no longer serves as the adequate point of mediation between capital and sovereignty. The structures and institutions that constitute it are today progressively withering away.

Both oppose arrogance: the Principle of Mediocrity opposes the pre-Enlightenment arrogance of believing ourselves significant in the world; the Spaceship Earth metaphor opposes the Enlightenment arrogance of aspiring to control the world. Both have a moral element: we should not consider ourselves significant, they assert; we should not expect the world to submit indefinitely to our depredations.
Thus the two ideas generate a rich conceptual framework that can inform an entire world view. Yet, as I shall explain, they are both false, even in the straightforward factual sense. And in the broader sense they are so misleading that, if you were seeking maxims worth being carved in stone and recited each morning before breakfast, you could do a lot worse than to use their negations. That is to say, the truth is that
People are significant in the cosmic scheme of things; and
The Earth’s biosphere is incapable of supporting human life.

…

The ‘naive’ audience’s mistake is a form of parochialism: they observe a phenomenon – people phoning in because their watches stopped – but they are failing to understand it as part of a wider phenomenon, most of which they do not observe. Though the unobserved parts of that wider phenomenon have in no way affected what we, the viewers, observe, they are essential to its explanation. Similarly, common sense and classical physics contain the parochial error that only one history exists. This error, built into our language and conceptual framework, makes it sound odd to say that an event can be in one sense extremely unlikely and in another certain to happen. But there is nothing odd about it in reality.
We are now seeing the interior of the spaceship as an overwhelmingly complex jumble of superposed objects. Most locations on board are packed with people, some of them on very unusual errands, and all unable to perceive each other.

…

Short-lived rapid changes have always happened: famines, plagues and wars have begun and ended; maverick kings have attempted radical change. Occasionally empires were rapidly created or whole civilizations were rapidly destroyed. But, while a society lasted, all important areas of life seemed changeless to the participants: they could expect to die under much the same moral values, personal lifestyles, conceptual framework, technology and pattern of economic production as they were born under. And, of the changes that did occur, few were for the better. I shall call such societies ‘static societies’: societies changing on a timescale unnoticed by the inhabitants. Before we can understand our unusual, dynamic sort of society, we must understand the usual, static sort.
For a society to be static, all its memes must be unchanging or changing too slowly to be noticed.

Outlook for Waterborne Commerce through the Port of New York. 1948.
_. The Port of New York. 1952.
_. “Proposal for Development of the Municipally Owned Waterfront and Piers of New York City.” February 10, 1948.
_. Via—Port of New York.
Port of Seattle, Marine Planning and Development Department. “Container Terminal Development Plan.” October 1991.
Port of Seattle, Planning and Research Department. “A Conceptual Framework for the Physical Development of the Port of Seattle.” April 1966.
Port of Singapore Authority. Annual Report and Accounts. Various years.
_. A Review of the Past and a Look into the Future. Singapore: Port of Singapore Authority, 1971.
Scottish Executive. “Container Transshipment and Demand for Container Terminal Capacity in Scotland.” Transport Research Institute, Napier University, Edinburgh, December 2003.

Thus we have a concept of surface structure defined in terms of rules that generate an infinite set of objects, standing in opposition to deep structure, and considerably more abstract than before, in that properties of deep structure are captured through trace theory.
On the other hand, suppose one were to discover that the structuralist concept of phoneme plays a very important role, previously unsuspected. Suppose that the arguments that have been advanced against the existence of a phonemic level could be surmounted within another conceptual framework. That would not be a return to an old idea, but an advance to a new idea, giving a new significance to an old concept. That would be progress.
When theories and the concepts that appear in them are personalized, one looks to see “who” is wrong; but that is not the correct way of thinking. That “who” may have been right in the context of his or her own time, wrong in the context of a richer theory, and will perhaps prove right once again.

…

We might try to approach the classic problem of accounting for action that is appropriate to situations but uncontrolled by stimuli in these terms. Given a partially structured system that provides an evaluation of outcomes, choices that are random except for maximizing “value” may have the appearance of free, purposeful, and intelligent behavior—but one must remain skeptical about this approach, though it is the only one that seems to fall within any conceptual framework intelligible to us.
Within cognitive capacity, the theory of mind has a distinctly rationalist cast. Learning is primarily a matter of filling in detail within a structure that is innate. We depart from the tradition in several respects, specifically, in taking the “a priori system” to be biologically determined.6 Outside the bounds of cognitive capacity, an empiricist theory of learning applies, by unfortunate necessity.

To study contemporary capitalism, I argue, sociology must go back before the disciplinary division of labour with economics negotiated on its behalf by its twentieth century founding figure, Talcott Parsons.2 For this it will be helpful to rediscover the sociology in classical economists from Smith to Pareto, Marshall, Keynes and Schumpeter, and the economics in classical sociologists like Weber, Sombart, Mauss and Veblen, to name only a few. Particular interest might usefully be paid to the institutional economics of the Historische Schule and to Marx the social theorist, as opposed to the deterministic economist. The lesson to be learned from all of them is that capitalism denotes both an economy and a society, and that studying it requires a conceptual framework that does not separate the one from the other.
How to study contemporary capitalism, then? My first answer is: not as an economy but as a society – as a system of social action and a set of social institutions falling in the domain of sociological rather than today’s standard economic theory.3 This is in fact the tradition of political economy in the nineteenth century. Political-economic theory was to identify the actors and interests underlying, or hiding behind, the ‘laws of movement’ of ‘the economy’, translating economic relations into social relations and showing the former to be a special case of the latter.

…

For a time, the dependence of politics and political success under democratic capitalism on uninterrupted capital accumulation – or, in the technocratic language of standard economics, on economic growth – inevitably led optimistic politicians to place their hopes on riding the tiger and to jump on the historical bandwagon towards liberalization and deregulation, until the re-formed capitalist economic regime almost crashed as a result of its unfettered progress.
It may seem like hairsplitting if I now ask, in Block’s terms as he reconstructs the Polanyian conceptual framework, whether the current crisis was due to the capitalist ‘economy’ having become misembedded or disembedded. Block declares the latter to be impossible, due to economic action always and inevitably being social action. But while one can fully and indeed emphatically agree with this, as I do,26 there is no logical need to conclude from it that a capitalist political economy is governed by a primacy of politics.

They also tell us what not to look for and what questions not to ask, which is why those astronomers didn’t bother looking for a giant intergalactic battleship warping Uranus’s orbit instead. These are invaluable directives, prerequisite to doing science—or, for that matter, to doing much of anything. As Alan Greenspan pointed out, during a moment in his congressional testimony when he seemed to be coming under fire for merely possessing a political ideology, “An ideology is a conceptual framework, the way people deal with reality. Everyone has one. You have to. To exist, you need an ideology.”
Greenspan was right. To exist, to deal with reality, we need a conceptual framework: theories that tell us which questions to ask and which ones not to, where to look and where not to bother. When that framework serves us well—when, say, it spares us the effort of asking what other kinds of long things a giraffe might possess, or taking seriously the proposition that behind a certain shaded rectangle we might encounter a certain naked movie star—we call it brilliant, and call it inductive reasoning.

Consumers have moved progressively from purchasing in local stores to shopping in large stores, including discount “big-box” stores—where businesses may also buy some of their supplies. Now spending is shifting online. A third example is estimating the value of income received in the form of deferred stock options, once a small part of total remuneration but now quite significant.
The actual number for GDP is therefore the product of a vast patchwork of statistics and a complicated set of processes carried out on the raw data to fit them to the conceptual framework.
The “Production Boundary”
As if all of this were not enough, there are some important conceptual questions about the GDP definition, some of which will be followed up in later chapters. The definitions have evolved over the years, and there are areas of active debate among national statistics experts.
Much of GDP is private-sector output or expenditure measured at the prices charged in the market, as mentioned earlier.

If one could precisely measure the elasticity of substitution between capital and labor and the elasticity of the supply of capital, it would be possible in principle to determine the optimal amount of capital-labor redistribution and the best instruments to achieve it. The intellectual and political conflict over redistribution is about more than just the measurement of elasticities, however. Indeed, this whole conceptual framework implicitly assumes that we accept the rules of the market economy and the allocative role of the price system. This is obvious in the case of the elasticity of capital supply (why should society give in to the threat of capitalist households to save less if they deem the rate of return on capital to be too low?). It is just as important, though, when it comes to the elasticity of capital/labor substitution: why should firms use more capital and less labor if the relative price of labor rises?
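For reference, the standard definition (not Piketty’s wording): the elasticity of substitution σ measures how strongly the capital-labor ratio responds to relative factor prices,

σ = d ln(K/L) / d ln(w/r),

where w is the wage and r the return on capital. When σ > 1, firms substitute toward capital readily as labor becomes relatively expensive; when σ < 1, they cannot, and capital’s share of income behaves very differently.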

Hustle is about getting things done, just like Fabian Ruiz did during his period of incarceration. It’s about being resourceful, opportunity-driven, and frugal—learning to do a lot with a little, to create your own destiny. A flexible mind-set allows you to move seamlessly from one world to another, borrowing from one and bringing new perspective to the next. An effective hustler does this seamlessly, transposing conceptual frameworks, making useful and valuable connections, and bringing skills and competencies from one area to another.
When you are hustling, there is no master plan; you are improvising and being responsive to what life throws your way. The hustle is about spotting an idea and just going for it. You don’t need massive resources, a perfect team, or the right environment. Much of innovation comes from constraint—from challenge and even scarcity.

In truth, when things burn in common air, something is being extracted from the air, not the reverse: oxygen molecules are bonding in the heat of combustion with whatever happens to be on fire. This is what we now call oxidation. When the air loses too many oxygen molecules to support the oxidation process, the flame goes out.
Priestley, alas, was on the wrong end of the phlogiston paradigm, and so when he happened upon an air in which flames burned more brightly than common air, he interpreted his findings using the conceptual framework of the existing paradigm. Breathable air that also exacerbated combustion was, logically, air that had been entirely emptied of phlogiston. (Or, put another way, it was air primed to be filled with phlogiston.) Within the rules of that conceptual system, Priestley’s dephlogisticated air was a fitting, if ungainly, appellation. Unfortunately, the rules of that system were fundamentally flawed.

This is the principle embedded in the multistakeholder theory (what the World Economic Forum communities often call the Spirit of Davos), which I first proposed in a book published in 1971.70 Boundaries between sectors and professions are artificial and are proving to be increasingly counterproductive. More than ever, it is essential to dissolve these barriers by engaging the power of networks to forge effective partnerships. Companies and organizations that fail to do this and do not walk the talk by building diverse teams will have a difficult time adjusting to the disruptions of the digital age.
Leaders must also prove capable of changing their mental and conceptual frameworks and their organising principles. In today’s disruptive, fast-changing world, thinking in silos and holding a fixed view of the future is fossilizing, which is why it is better, in the dichotomy presented by the philosopher Isaiah Berlin in his 1953 essay about writers and thinkers, to be a fox than a hedgehog. Operating in an increasingly complex and disruptive environment requires the intellectual and social agility of the fox rather than the fixed and narrow focus of the hedgehog.

The status of the population will be settled “with the coming of peace.” This mode of operation allows Israel to continue to present itself as a “democratic state” and enjoy the many benefits attached to this status in the international arena.
Hence “the peace process” and talk about “two states for two peoples” are not in any contradiction with the occupation, not even the “temporary occupation” of 1967. They are a political and conceptual framework designed to enable and perpetuate the status quo for as long as possible.
Israel would find it hard to market this façade to the world if it were not assisted by many others, some serving their self-interests and others out of misled good intentions. The leadership of the Palestinian national movement also plays a key role in providing credibility for the fake peace process. It is followed by a large part of the leadership of the Palestinian Arab population within the Green Line.

Invoking Peirce’s understanding of scientific method and scientific growth that appeals to the concept of abduction, which puts limits on what count as “admissible hypotheses,” he argues that innate structures that are determined by our genetic endowment set limits to the questions that we can formulate. The questions we can tractably formulate are called “problems,” but given the limits within which their formulation is so much as possible, there will be things that escape our cognitive powers; to the extent that we can even think them, we will, given our current conceptual frameworks and knowledge, find ourselves unable to formulate them in a way that allows a tractable form of scientific inquiry to be pursued. These he calls “mysteries.” The title of this book, What Kind of Creatures Are We?, is directly addressed by this, since other sorts of creatures, with a different biological endowment from ours, may be able to formulate problems that remain mysteries to us.

Good ideas are not conjured out of thin air; they are built out of a collection of existing parts, the composition of which expands (and, occasionally, contracts) over time. Some of those parts are conceptual: ways of solving problems, or new definitions of what constitutes a problem in the first place. Some of them are, literally, mechanical parts. To go looking for oxygen, Priestley and Scheele needed the conceptual framework that the air was itself something worth studying and that it was made up of distinct gases; neither of these ideas became widely accepted until the second half of the eighteenth century. But they also needed the advanced scales that enabled them to measure the minuscule changes in weight triggered by oxidation, technology that was itself only a few decades old in 1774. When those parts became available, the discovery of oxygen entered the realm of the adjacent possible.

This is what transpired at the moment of truth, the beginning of the new intifada, which shattered the fantasy of "the end of the conflict" and "peace is around the corner." The collapse of the political course imposed by Barak under the auspices of Clinton, the widespread feeling that Israel had put the Palestinians to the test and they had failed it—thus, in effect, betraying the entire peace camp, the violent Palestinian eruption—all these created a situation in which most Israeli Jews were forced to expose the fundamental conceptual framework within which they perceive political reality. This contradictory framework portrays the relationship between the two peoples as a supposedly symmetrical relationship between two national movements of equal standing, at the same time accepting the occupation as a paradigm of asymmetrical power relations and ignoring its continuous history. It dictates a willingness to make "concessions," alongside an imperious demand that the other side gratefully recognize the generous handout.

Merely having the ability to be highly productive, relaxed, and in control doesn’t make you that way. If you’re like most people, you can use a coach—someone to walk you step by step through the experience and provide some guideposts and handy tricks along the way, until your new operational style is elegantly embedded.
You’ll find that in part 2.
part 2
Practicing Stress-Free Productivity
4
Getting Started: Setting Up the Time, Space, and Tools
IN PART 2 we’ll move from a conceptual framework and limited application of workflow mastery to full-scale implementation and best practices. Going through this program often gives people a level of relaxed control they may never have experienced before, but it usually requires the catalyst of step-by-step procedures to get there. To that end, I’ll provide a logical sequence of things to do, to make it as easy as possible for you to get on board and glean the most value from these techniques.

The zero expected profit condition underlying the Roll and asymmetric information models is conditional on a trade, in which event the dealer simply recovers his costs. The exceptions are the inventory control models, where the dealer generally prefers to trade in one direction or the other. In this section we investigate a customer’s order choice when there is a preference for trade. The conceptual framework is based on Cohen, Maier, Schwartz, and Whitcomb (1981) (CMSW). Related papers include Angel (1994) and Harris (1998).

We can set up a simple order choice situation by building on the model considered in section 11.2. In one version of that model, a dealer already at his portfolio optimum set his bid to include compensation for being pulled away from that optimum. The bid is defined by the condition that the expected utility is the same whether or not a customer actually executes against the bid.
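In schematic notation (mine, not the book’s), the bid B solves an indifference condition of the form

E[U(wealth with no trade)] = E[U(wealth after buying one unit at B)],

so B is set far enough below the dealer’s valuation that the price concession exactly compensates, in expected utility, for being pulled away from the portfolio optimum.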

Similarly, to say one wishes to create a “rational” social order implies that current social arrangements might as well have been designed by the inhabitants of a lunatic asylum. Now, surely, all of us have felt this way at one time or another. But if nothing else, it is an extraordinarily intolerant position, since it implies that one’s opponents are not just wrong, but in a certain sense, wouldn’t even know what it would mean to be right, unless, by some miracle, they could come around and accept the light of reason and decide to accept your own conceptual framework and point of view.
This tendency to enshrine rationality as a political virtue has had the perverse effect of encouraging those repelled by such pretentions, or by the people who profess them, to claim to reject rationality entirely, and embrace “irrationalism.” Of course, if we simply take rationality in its minimal definition, any such position is absurd. You can’t really make an argument against rationality, because for that argument to be convincing, it would itself have to be framed in rational terms.

You form your expectations about the future with information technical systems don’t take into consideration. Consequently, this sets up a conflict between what your intellect says should be happening and the purely mathematical means of predicting human behavior afforded by your technical system. This is precisely why technical systems are so difficult to relate to and execute. People aren’t taught to think in terms of probabilities—and we certainly don’t grow up constructing a conceptual framework that correlates a prediction of mass human behavior in statistical odds by means of a mathematical formula.

To be able to execute your trading systems properly, you will need to incorporate two concepts into your mental framework—thinking in terms of probabilities and correlating the numbers or the mechanics of your system to the behavior. Unfortunately, the only way you can really learn these things is actually to experience them by executing your system.
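A minimal illustration of thinking in probabilities (mine, not the author’s): a system’s edge can be summarized by its expectancy per trade,

E = p × W − (1 − p) × L,

where p is the win rate, W the average win, and L the average loss. A system that wins only 40 percent of the time, with W = $300 and L = $100, still has E = 0.4 × 300 − 0.6 × 100 = $60 per trade; no individual trade is predictable, but the distribution of many trades is.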

The program, and its input data, exist together on the tape as sequences of symbols. And second, because of the arbitrarily long length of the tape, a Turing machine has the ability to “remember” what has happened in the arbitrarily distant past.
In developing this view of a computing machine, Turing was not suggesting it as a practical design for an actual machine. Rather, as a mathematician he used his machines as a conceptual framework in which to study the limits on just what mechanistic devices can actually compute. Indeed, the title of his 1936 paper, “On Computable Numbers, with an Application to the Entscheidungsproblem’’—that final tongue-twister translates as “the decision problem’’—clearly shows Turing’s intent. His great accomplishment was to show that not all the numbers we can imagine are in fact actually computable.
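As a toy illustration of that conceptual framework (a modern sketch, not Turing’s own formalism), a machine is nothing more than a lookup table from (state, symbol) to (symbol to write, head move, next state); the hypothetical rule table below increments a binary number on the tape.

    from collections import defaultdict

    # Minimal Turing machine simulator: rules map (state, symbol) to
    # (write, move, next_state); move is -1 (left), +1 (right), or 0.
    def run(rules, tape, state="start", head=0, max_steps=1000):
        cells = defaultdict(lambda: "_", enumerate(tape))  # "_" is blank
        for _ in range(max_steps):
            if state == "halt":
                break
            write, move, state = rules[(state, cells[head])]
            cells[head] = write
            head += move
        lo, hi = min(cells), max(cells)
        return "".join(cells[i] for i in range(lo, hi + 1)).strip("_")

    # Example machine: scan right to the end of the number, then carry
    # leftward, turning trailing 1s into 0s until a 0 (or blank) absorbs it.
    rules = {
        ("start", "0"): ("0", +1, "start"),
        ("start", "1"): ("1", +1, "start"),
        ("start", "_"): ("_", -1, "carry"),
        ("carry", "1"): ("0", -1, "carry"),
        ("carry", "0"): ("1", 0, "halt"),
        ("carry", "_"): ("1", 0, "halt"),
    }
    print(run(rules, "1011"))  # prints "1100", i.e. 11 + 1 = 12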

“I was so linked to the Watson achievement,” he explains, “that I felt I was almost losing my identity.” Working for a hedge fund, he concedes, was never part of his career plan. But the appeal of working in a smaller environment in an entirely new field for him—applying artificial intelligence techniques to modeling the economy—won him over. Bridgewater strives to combine theory with data in a conceptual framework that the investment firm’s founder, Ray Dalio, terms the “economic machine.”
That approach to investment, says Ferrucci, is in sync with his current thinking in what he calls “my 30-year journey in artificial intelligence.” Decades ago, the main focus of artificial intelligence research was to develop knowledge rules and relationships to make so-called expert systems. But those systems proved extremely difficult to build.

Scenario 1: from the work society to the knowledge society
Many authors chase away, as if it were a troublesome fly, the human concern that the revolutionary rationalization based on information technology is designed, if not to eliminate, then to thin out paid employment. Two basic elements here reinforce each other: the way in which economists think in terms of models (some would say: their model Platonism); and the historical experience of the first modernity, in which the workers' fears that they would be replaced by machines proved for long to be unfounded.
The conceptual framework of classical economics excludes in principle the notion that the work society could run out of paid jobs. In the model of homo oeconomicus, only certain prevailing conditions – too high a price for labour, fossilized bureaucratic structures, state intervention – can hinder the creation of new jobs. The historical variant of a capitalism without work does not even come into consideration.

"The future of
warfare," the journal of the Army War College declared, "lies in the
streets, sewers, highrise buildings, and sprawl of houses that form the
broken cities of the world.... Our recent military history is punctuated
with city names — Tuzla, Mogadishu, Los Angeles [!], Beirut, Panama
City, Hue, Saigon, Santo Domingo — but these encounters have been
but a prologue, with the real drama still to come."8
To help develop a larger conceptual framework for MOUT, military
planners turned in the 1990s to Dr. Strangelove's old alma mater, the
Santa Monica-based RAND Corporation. RAND, a nonprofit think
tank established by the Air Force in 1948, was notorious for wargaming nuclear Armageddon in the 1950s and for helping to strategize
the Vietnam War in the 1960s. These days RAND does cities: its
researchers ponder urban crime statistics, inner-city public health, and
the privatization of public education.

(According to market design guru Al Roth, one theory holds that the term “fraternity/sorority rush,” which today describes the process by which sororities and fraternities recruit new members, comes from the frenzied competition among sororities to lock in new members.4) It’s what prompted medical residency programs to develop a centralized clearinghouse in the 1940s to fend off students receiving exploding offers before they were done with their intro to anatomy course.
These allocation problems all now have centralized clearinghouses, many designed with the basic deferred acceptance algorithm as their foundations. But that’s really all that Gale and Shapley provided: a conceptual framework that market designers have, for several decades now, been applying, evaluating, and refining. They’ve learned from its successes and, unfortunately, learned even more from its inevitable failures: modeling real-life exchanges is an imprecise, iterative process in which many of us find ourselves as experimental subjects.
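For the curious, the basic deferred acceptance algorithm fits in a few lines; this is a generic sketch with made-up preference lists, not any clearinghouse’s production code.

    # Deferred acceptance (proposer-optimal form): proposers work down
    # their preference lists; reviewers hold the best offer seen so far.
    def deferred_acceptance(proposer_prefs, reviewer_prefs):
        rank = {r: {p: i for i, p in enumerate(prefs)}
                for r, prefs in reviewer_prefs.items()}
        next_choice = {p: 0 for p in proposer_prefs}
        engaged = {}                      # reviewer -> tentatively held proposer
        free = list(proposer_prefs)
        while free:
            p = free.pop()
            r = proposer_prefs[p][next_choice[p]]
            next_choice[p] += 1
            if r not in engaged:
                engaged[r] = p            # tentative acceptance
            elif rank[r][p] < rank[r][engaged[r]]:
                free.append(engaged[r])   # reviewer trades up, jilting the old offer
                engaged[r] = p
            else:
                free.append(p)            # rejected; p proposes further down the list
        return {p: r for r, p in engaged.items()}

    students = {"ann": ["mercy", "city"], "bob": ["mercy", "city"]}
    hospitals = {"mercy": ["bob", "ann"], "city": ["ann", "bob"]}
    print(deferred_acceptance(students, hospitals))  # {'bob': 'mercy', 'ann': 'city'}

The "deferred" in the name is the whole trick: no acceptance is final until the market clears, which is exactly what defuses the exploding-offer dynamic described above.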
The Complicated Job of Engineering Matches
Market designer Al Roth likes to use a bridge-building metaphor to explain the contrast between his own work and that of design pioneers like Shapley.

This can be seen through a careful examination of the work of Charles Darwin. The quintessential phase transition in science, and paradigm shift, is that of the theory of evolution by natural selection. Everything in biology prior to evolution was sophisticated stamp collecting, ordering the living world around us and exploring its wonders. With the advent of evolution, biologists finally had a conceptual framework to make sense of the facts surrounding them. But the acceptance of evolution wasn’t immediate. While On the Origin of Species was a bestselling book, it did not find universal agreement within the Victorian populace.
The same was true of the scientists themselves. David Hull, a philosopher of science, examined many of Darwin’s well-known contemporaries to see who eventually accepted the theory of natural selection, and how long it took them to do so.

And whereas the basic income debate under President Johnson had begun when experts signaled unemployment as becoming endemic, Nixon now spoke of joblessness as a “choice.” He deplored the rise of big government, even though his plan would distribute cash assistance to some 13 million more Americans (90% of them working poor).
“Nixon was proposing a new kind of social provision to the American public,” writes the historian Brian Steensland, “but he did not offer them a new conceptual framework through which to understand it.”4 Indeed, Nixon steeped his progressive ideas in conservative rhetoric.
What, we may well ask, was the president doing?
There is a brief anecdote that explains it. On August 7 of that same year, Nixon told Moynihan that he’d been reading biographies of the British Prime Minister Benjamin Disraeli and the statesman Lord Randolph Churchill (the father of Winston).

They can deal with shocks that might disrupt transport systems (strikes, civil unrest, and severe weather) and also with fuel price hikes that might result from peak oil and global shortages of oil as India, China and Brazil accelerate their “progress” towards Californian or Swedish levels of car ownership and use. It will be a mistake of some considerable historical significance not to build resilient cities.
Holger and Dalkmann (2007) have provided a coherent structure that locates e-mobility in the sustainable transport conceptual framework. They call this the “Avoid, Shift, Improve” strategy, or ASI for short.
A = Avoid: through land use planning and accessibility planning, destinations are co-located with residential areas and distances are kept short. This leads to a lower level of car use and a higher level of use of non-motorised transport. Curitiba in Brazil and Singapore have developed spatial strategies and land use patterns that lead to lower CO2 emissions from transport than cities that pursue low density developments or extensive suburbanisation.

So why not set a course for one of the biggest and most famous—and by far the most frequently visited—museums in the world? We’re in need of some culture. We’ll go to the Louvre. We may even catch a glimpse of the broken statue that moved Rilke to such lyrical ecstasies.
But right away, things start getting confused. Not the travel arrangements or the scheduling of the visits, but the conceptual framework that underwrites the journey. The whole idea of “culture,” that is. In Keywords, his indispensable glossary of modern thought, the literary scholar Raymond Williams observes that “culture is one of the two or three most complicated words in the English language.” In some of its early senses, it is nearly synonymous with education, referring to the growth and tending of young minds. It suggests farming and gardening—agriculture, horticulture, cultivation—but it also carries with it the burdens of civilization.

Computing Surveys (ACM) 3, issue 4es (December).
Another pioneer in the 1960s who was inspired by Bush was Douglas Engelbart, who founded a research lab with the goal of “augmenting human intellect.” His lab developed a hypermedia groupware system called Augment (originally called NLS). Augment supported bookmarks, hyperlinks, recording of e-mail, a journal, and more.
Engelbart, Douglas C. “Augmenting Human Intellect: A Conceptual Framework. Summary Report AFOSR-3223 Under Contract AF 49(638)-1024,” SRI Project 3578 for Air Force Office of Scientific Research. Menlo Park, Calif.: Stanford Research Institute, October 1962.
———. “Authorship Provisions in AUGMENT.” COMPCON ’84 Digest: Proceedings of the COMPCON Conference, San Francisco, California, February 27-March 1, 1984, 465-72.
Many others besides us have noted the inadequacy of conventional computer file systems.

Longevity: To the Limits and Beyond (Research and Perspectives in Longevity)
by
Jean-Marie Robine,
James W. Vaupel,
Bernard Jeune,
Michel Allard

According to the explications of Rowe and Kahn, successful aging is found where extrinsic factors either do not contribute to the negative effects of aging or slow down the effects of aging. Two logical questions are whether and how successful aging contributes to longevity in general on the one hand, and to individual differences in longevity on the other hand.

Given this conceptual framework, the search for sensitive predictors of successful aging is a complex task. We have only begun to untangle the web of complexity. At least six levels of complexity need to be addressed in the study of successful aging.

One, the outcome criteria of successful aging need to be defined, and they may be different for different purposes. One may define success by its quality or quantity or a combination thereof.

pages: 296 words: 78,112

Devil's Bargain: Steve Bannon, Donald Trump, and the Storming of the Presidency
by
Joshua Green

(Amid charges of cocaine use from ex-employees, which Kwatinetz denied, and unexplained absences, Kwatinetz eventually left The Firm.)
Bannon didn’t stick around for the revolution. By 2005, he had left Hollywood for the other side of the globe, Hong Kong, where he became involved in what was undoubtedly the strangest business of any in his kaleidoscopic career—one that introduced him to a hidden world, burrowed deep into his psyche, and provided a kind of conceptual framework that he would later draw on to build up the audience for Breitbart News, and then to help marshal the online armies of trolls and activists that overran national politics and helped give rise to Donald Trump.
The business centered on a video game, World of Warcraft, a so-called “massively multiplayer online role-playing game” (MMO), whose 10 million subscribers competed against one another in the mythical realm of Azeroth, a fantasy world of elves, dwarfs, trolls, goblins, and dragons.

International Migration Review, 31(4), 923–960.
Saleebey, D. (2002). Introduction: Power in the people. In Saleebey, D. (ed). The Strengths Perspective in social work practice. 3rd edition. Boston, MA: Allyn & Bacon.
Segal, U. (2002). A framework for immigration: Asians in the United States. New York, NY: Columbia University Press.
Serageldin, I. (1999). Foreword. In Feldman, T.R. & Assaf, S. Social capital: Conceptual frameworks and empirical evidence. Social Capital Initiative, working paper #5. Washington, DC: World Bank.
Tinker, H. (1995). The British colonies of settlement. In Cohen, R. (ed). The Cambridge survey of world migration. Cambridge, UK: Cambridge University Press, 14–20.
Tsay, C-l. & Hayase, Y. (eds). (2001). Special issue: International migration and structural change in the APEC member economies.

…

London, Routledge.
Hughes, G. (2005). Annual Report on Statistics on Migration, Asylum, and Return: Ireland 2002. Dublin, Economic and Social Research Institute.
Interdepartmental Committee on Non-Irish Nationals (1987). Interim Report on Applications for Refugee Status 25–11–1993. Dublin, Government Publications.
Marx, E. (1990). “The Social World of Refugees: A Conceptual Framework.” Journal of Refugee Studies, Vol. 3, No. 3.
National Economic and Social Council of Ireland (2006). Managing Migration in Ireland: A Social and Economic Analysis; A Report by the International Organisation for Migration for the National Economic and Social Council of Ireland. Report Number 116, Dublin, NESC.
Neckerman, K. M., Carter, P., and Lee, J. (1999). “Segmented Assimilation and Minority Cultures of Mobility.”

His observation, back in 2002, was that, of all the scientists and engineers working in the AI field:
1. 80 percent don’t believe in the concept of General Intelligence (but instead, in a large collection of specific skills and knowledge).
2. Of those that do, 80 percent don’t believe it’s possible – either ever, or for a long, long time.
3. Of those that do, 80 percent work on domain-specific AI projects for reasons of commercial or academic politics (results are a lot quicker).
4. Of those left, 80 percent have the wrong conceptual framework.
5. And nearly all of the people operating under basically correct conceptual premises lack the resources to adequately realize their ideas.
I think Peter’s argument is basically on-target. Of course, the 80 percent numbers are crude approximations, and most of the concepts involved are fuzzy in various ways. But an interesting observation is that, whatever the percentages actually are, most of them have decreased considerably since 2002.
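Taken at face value, the four 80 percent filters compound multiplicatively:

0.2 × 0.2 × 0.2 × 0.2 = 0.2⁴ = 0.0016,

so only about 0.16 percent of the field (roughly 1 in 600 researchers) would be working under a correct conceptual framework, before the resource constraint in point 5 thins the group further. This is just arithmetic on the quoted figures, which, as noted, are crude approximations.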

Lambton, and Bernard Lewis, eds., The Cambridge History of Islam. Vol. I: The Central Islamic Lands (New York: Cambridge University Press, 1970), pp. 64–65.
11. Fred M. Donner, “The Formation of the Islamic State,” Journal of the American Oriental Society 106, no. 2 (1986): 283–96.
12. See, for example, Douglass C. North, Barry R. Weingast, and John Wallis, Violence and Social Orders: A Conceptual Framework for Interpreting Recorded Human History (New York: Cambridge University Press, 2009), who tend to see the state as a collective action problem among a group of relatively equal oligarchs.
13. One of the practical consequences of this was that monarchs often intervened to lower the predatory taxes imposed by local elites on their dependent populations. Hodgson, The Venture of Islam, pp. 281–82; Donner, “The Formation of the Islamic State,” pp. 290–91.
14. See Bernard Lewis, “Politics and War,” in Schacht, The Legacy of Islam, pp. 164–65.
15. Holt, Cambridge History of Islam, p. 72.
16. Donner, The Early Islamic Conquests, p. 258.
17. Ibid., p. 263.
18. For general background, see David Ayalon, Islam and the Abode of War: Military Slaves and Islamic Adversaries (Brookfield, VT: Variorum, 1994).
19. On the rise of the Abbasids, see Hugh N.

…

New York: Cambridge University Press.
———. 1973. The Rise of the Western World: A New Economic History. New York: Cambridge University Press.
———, and Barry R. Weingast. 1989. “Constitutions and Commitment: The Evolution of Institutions Governing Public Choice in Seventeenth-Century England.” Journal of Economic History 49(4):803–32.
———, Barry R. Weingast, and John Wallis. 2009. Violence and Social Orders: A Conceptual Framework for Interpreting Recorded Human History. New York: Cambridge University Press.
Olson, Mancur. 1965. The Logic of Collective Action: Public Goods and the Theory of Groups. Cambridge, MA: Harvard University Press.
———. 1982. The Rise and Decline of Nations. New Haven: Yale University Press.
———. 1993. “Dictatorship, Democracy, and Development.” American Political Science Review 87(9):567–76.

For a survey of existing definitions, see Rachel Kleinfeld, “Competing Definitions of the Rule of Law,” in Thomas Carothers, ed., Promoting the Rule of Law Abroad: In Search of Knowledge (Washington, D.C.: Carnegie Endowment, 2006).
2. S. N. Eisenstadt, Traditional Patrimonialism and Modern Neopatrimonialism (Beverly Hills, CA: Sage, 1973).
3. Douglass C. North, John Wallis, and Barry R. Weingast, Violence and Social Orders: A Conceptual Framework for Interpreting Recorded Human History (New York: Cambridge University Press, 2009).
4. Daron Acemoglu and James A. Robinson, Why Nations Fail: The Origins of Power, Prosperity, and Poverty (New York: Crown, 2012).
5. For definitions of these terms, see Huntington, Political Order in Changing Societies, pp. 12–24; also the discussion in Francis Fukuyama, The Origins of Political Order: From Prehuman Times to the French Revolution (New York: Farrar, Straus and Giroux, 2011), pp. 450–51.
6.

…

American Political Science Review 102(1):19–31.
Niskanen, William A. 1973. Bureaucracy—Servant or Master? Lessons from America. London: Institute of Economic Affairs.
North, Douglass C., and Robert Paul Thomas. 1973. The Rise of the Western World: A New Economic History. New York: Cambridge University Press.
North, Douglass C., John Wallis, and Barry R. Weingast. 2009. Violence and Social Orders: A Conceptual Framework for Interpreting Recorded Human History. New York: Cambridge University Press.
Nunn, Nathan. 2007. “Historical Legacies: A Model Linking Africa’s Past to Its Current Underdevelopment.” Journal of Development Economics 83(1):157–75.
———. 2008. “The Long-Term Effects of Africa’s Slave Trades.” Quarterly Journal of Economics 123(1):139–76.
Nye, Joseph S., Jr. 1967. “Corruption and Political Development: A Cost-Benefit Analysis.”

I do this out of a
sense of responsibility to the historical record, and so that readers will know
where I'm coming from. The book is therefore divided into halves: the first
half is my effort to retrace the arc of my learning curve, and the second half
is a more objective effort to use this as the foundation on which to erect a
conceptual framework for understanding the new global economy. Along
the way I explore critical elements of this emerging global environment: the
principles governing it that arose out of the Enlightenment of the eighteenth century; the vast energy infrastructure that powers it; the global financial imbalances and dramatic shifts in world demographics that threaten
it; and, despite its unquestioned success, the chronic concern over the justice of the distribution of its rewards.

…

That is not an altogether illogical feeling, but, as is taught in Economics 101,
when a market economy periodically veers off a seemingly stable path, competitive responses act to rebalance it. Since millions of transactions are involved in the rebalancing, the process is very difficult to grasp. The abstractions
of the classroom can only hint at the dynamics that, for example, enabled the
U.S. economy to stabilize and grow after the September 11 attacks.
Economic populism imagines a more straightforward world, in which
a conceptual framework seems a distraction from evident and pressing
need. Its principles are simple. If there is unemployment, then the government should hire the unemployed. If money is tight and interest rates as a
consequence are high, the government should put a cap on rates or print
more money. If imported goods are threatening jobs, stop the imports. Why
are such responses any less reasonable than supposing that if you want a car
to start, you turn the ignition key?

The text discusses some fairly sophisticated topics not usually discussed in
introductory derivatives texts; for example, real-world electronic market
trading platforms such as CME’s Globex. On the theory side, there is a much-needed and detailed discussion of what risk-neutral valuation really means in
the context of the dynamics of the hedge portfolio.
The text is a balanced, logical presentation of the major derivatives classes
including forward and futures contracts in Part 1, swaps in Part 2, and options
in Part 3. The material is unified by providing a modern conceptual framework
and exploiting the no-arbitrage relationships between the different derivatives
classes.
Some of the elements explained in detail in the text are:
• Hedging, Basis Risk, Spreading, and Spread Basis Risk.
• Financial Futures Contracts, their Underlying Instruments, Hedging and Speculating.
• OTC Markets and Swaps.
• Option Strategies: Hedging and Speculating.
• Risk-Neutral Valuation and the Binomial Option Pricing Model.

…

On the
theory side, a detailed discussion of what risk-neutral valuation really means
in the context of the dynamics of the hedge portfolio is provided for the simplest
option pricing model.
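To make that idea concrete, here is a minimal one-period sketch of risk-neutral valuation with the binomial model; the function name and the numbers are illustrative assumptions, not taken from the text.

import math

def binomial_call(S0, K, r, u, d, dt):
    """One-period binomial call pricing via risk-neutral valuation.

    S0: spot price; K: strike; r: continuously compounded risk-free rate;
    u, d: gross up/down returns; dt: period length in years.
    """
    # Risk-neutral probability: the unique p that makes the discounted stock
    # price a martingale (requires d < e^{r*dt} < u, else there is arbitrage).
    p = (math.exp(r * dt) - d) / (u - d)
    payoff_up = max(S0 * u - K, 0.0)
    payoff_down = max(S0 * d - K, 0.0)
    # Discount the expected payoff under the risk-neutral measure.
    return math.exp(-r * dt) * (p * payoff_up + (1 - p) * payoff_down)

# Example: at-the-money call, 5% rate, 20% up / 10% down move over one year.
print(round(binomial_call(100.0, 100.0, 0.05, 1.2, 0.9, 1.0), 2))  # ~9.59

The weighting p is exactly the probability that makes the hedge portfolio grow at the risk-free rate, which is the point the text's discussion of the hedge portfolio dynamics develops.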
A balanced, logical presentation of the major derivatives classes is given.
This includes: Forward and futures contracts in Part 1; Swaps in Part 2; and
Options in Part 3. The material is unified by providing a modern conceptual
framework and exploiting the no-arbitrage relationships between the different
derivatives classes.
The goals of the text are to guide the reader through the derivatives markets;
to develop the reader’s skill sets needed in order to incorporate and manage
derivatives in a corporate or risk management setting; and to provide a solid
foundation for further study. This textbook is for students, both undergraduate
and graduate, as well as for those with an interest in how and why these markets
work and thrive.

A behaviorist running a rat through a maze would discuss the association between stimulus and response but would refuse to speculate in any way about the mind of the rat; now engineers were building mental models of rats out of a few electrical relays. They were not just prying open the black box; they were making their own. Signals were being transmitted, encoded, stored, and retrieved. Internal models of the external world were created and updated. Psychologists took note. From information theory and cybernetics, they received a set of useful metaphors and even a productive conceptual framework. Shannon’s rat could be seen not only as a very crude model of the brain but also as a theory of behavior. Suddenly psychologists were free to talk about plans, algorithms, syntactic rules. They could investigate not just how living creatures react to the outside world but how they represent it to themselves.
Shannon’s formulation of information theory seemed to invite researchers to look in a direction that he himself had not intended.

…

But there had been papers on information theory, life, and topology; information theory and the physics of tissue damage; and clerical systems; and psychopharmacology; and geophysical data interpretation; and crystal structure; and melody. Elias, whose father had worked for Edison as an engineer, was himself a serious specialist—a major contributor to coding theory. He mistrusted the softer, easier, platitudinous work flooding across disciplinary boundaries. The typical paper, he said, “discusses the surprisingly close relationship between the vocabulary and conceptual framework of information theory and that of psychology (or genetics, or linguistics, or psychiatry, or business organization).… The concepts of structure, pattern, entropy, noise, transmitter, receiver, and code are (when properly interpreted) central to both.” He declared this to be larceny. “Having placed the discipline of psychology for the first time on a sound scientific basis, the author modestly leaves the filling in of the outline to the psychologists.”

Over the long term, intellective mastery will depend upon being able to develop a tacit knowledge that facilitates the recognition of decision alternatives and frees the mind for the kind of insight that could result in innovation and improvement. Such tacit recognition depends upon first being able to explicitly construct the significance of patterns and relationships in the data. Such meanings cannot be achieved without a level of intellective skill development that allows the worker to solve the problem of reference, engage in reasoning that is both inductive and deductive, and apply a conceptual framework to the information at hand. Meaning must be constructed explicitly in order to become implicit later. Intellective skill is necessary for the creation of meaning, and real mastery begins to emerge when such meanings are consolidated in tacit
knowledge. While the development of mastery in the action medium does not require extensive explication, mastery in the symbolic medium depends upon explicitly constructed meaning, and intellective skill is the means by which this is achieved.

Kant tried to forge a synthesis of empiricism and rationalism which, in rough outline, works well in today’s nature-nurture debate. The mind is not a mere associator of sensory impressions (as in the empiricism of his day and the connectionism of ours), nor does it come equipped with actual knowledge about the contents of the world (as in some versions of the rationalism of his day and in the Extreme Nativism of ours). What the innate apparatus of the mind contributes is a set of abstract conceptual frameworks that organize our experience—space, time, substance, causation, number, and logic (today we might add other domains like living things, other minds, and language). But each of these is an empty form that must be filled in by actual instances provided by the senses or the imagination. As Kant put it, his treatise “admits absolutely no divinely implanted or innate representations. . . . There must, however, be a ground in the subject which makes it possible for these representations to originate in this and no other manner. . . .

…

Metaphorical connections saturate our language, drive our science, enliven our literature, burst out (at least occasionally) in children’s speech, and remind us of things past. On the other hand, when experimentalists lead the horse to water, they can’t make it drink.
One factor is simply expertise. Tea ceremonies, radiation treatments, and invading armies are obscure to most students, so they don’t have the needed conceptual framework at their fingertips. Subsequent studies have shown that expertise in a topic can make deep analogies come more easily. For example, when students who had taken a single physics course were shown a bunch of problems and asked which ones were similar, they lumped together the ones that had pictures of the same kinds of objects—the inclined planes in one group, the pulleys in another, and so on.

I have listed some of them in the endnotes for readers who want to go deeper. My aim here is to tie together the broader set of overlapping, complex issues in a way that makes sense to an informed reader who does not have special Internet-related expertise—beyond simply being an Internet and cell phone user. For people who are experts on some of these issues, I have tried to provide a fresh conceptual framework and geopolitical context, which I hope will be useful to experts and nonexperts alike who are concerned about the future of freedom in the Internet age.
It is not possible to document in one concise book all the violations of Internet freedoms and rights happening everywhere in the world. If your rights to digital free expression and assembly are under attack but your country is not mentioned in this book, please understand that the omission does not imply a lack of concern for the violations you and your compatriots are enduring.

One almost got the impression they were reading off an internal scorecard. The less avid fans remembered fewer important facts about the game and were more likely to recount superficial details like the weather. Because they lacked a detailed internal representation of the game, they couldn’t process the information they were taking in. They didn’t know what was important and what was trivial. They couldn’t remember what mattered. Without a conceptual framework in which to embed what they were learning, they were effectively amnesics.
Could any less be said of those two thirds of American teens who don’t have a clue when the Civil War occurred? Or the 20 percent who don’t know who the United States fought against in World War II? Or the 44 percent who think that the subject of The Scarlet Letter was either a witch trial or a piece of correspondence?

Matteo Carandini cautions us that it is too much to expect to be able to bridge directly from neurophysiology to behavior, and suggests how computation might help fill the gap. Leah Krubitzer reminds us of the risks in assuming that science can be accomplished on a timetable, and Arthur Caplan highlights the practical and ethical concerns, and consequences, of a brain mapping project, including how to fund it, what to do with the data, and how to decide when we’ve succeeded. Finally, Gary Marcus argues that current conceptual frameworks for understanding complex cognition and behavior are impoverished, and that in order to make progress the field of neuroscience must significantly broaden its search for computational principles.
Plate 1. a. Allen Reference Atlas plate for a sagittal section (i.e., front to back) of the mouse brain. b. In situ hybridization image of a calcium-binding gene (Calb 1), showing expression in the cortex (top layer of b), striatum (left center), hippocampus (curved shape below cortex), and cerebellum layer (top right in layer).

Scholars across a range of disciplines
have documented the rise and dominance of an approach to governance,
across a variety of scales, informed by key principles of contemporary
neoliberalism, including, most notably, the preeminence of the “free market” in allocating goods and services, the retreat or reconfiguration of the
state to accommodate the requirements of transnational market forces, and
an emphasis on policies promoting and protecting free trade, foreign direct
investment, and private property rights.21 The following section discusses
why the urban scale is of particular importance for understanding this project, the politics of urban security governance that are integral to defining it,
and the specific role that policing, crime, and the criminal play in its execution; doing so will provide the necessary theoretical and conceptual framework for the subsequent discussion of Cape Town.
Neoliberal principles of economic reform originally came to prominence through their application at the national level in the global South.
Although change was already afoot in the nations and cities of the global
North as well, it was the structural adjustment programs of the World
Bank and the International Monetary Fund in the 1980s and 1990s,
and the intimately related prescriptions of the Washington Consensus,
that first drew attention and notoriety to the ascent of neoliberalism as
a global governance force.22 The requirements for installing this new
governance regime were substantial, and their implementation often
necessitated significant restructuring of the state and the strict management of often intense political resistance to all or part of the project.

Bill investigated the refactorings that would be useful
for C++ framework development and researched the necessary semantics-preserving
refactorings, how to prove they were semantics preserving, and how a tool could implement these
ideas. Bill's doctoral thesis [Opdyke] is the most substantial work on refactoring to date. He also
contributes Chapter 13 to this book.
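For readers new to the term, here is a minimal sketch of what a semantics-preserving refactoring looks like in practice; the example and all names are mine, not Opdyke's.

# An "extract method" refactoring: behavior is identical before and after,
# which is what makes the edit safe to perform mechanically.

# Before:
def invoice_total_before(items, tax_rate):
    total = 0.0
    for price, qty in items:
        total += price * qty
    return total * (1.0 + tax_rate)

# After: the loop is extracted into a named helper; every caller sees
# exactly the same results, so the program's semantics are preserved.
def subtotal(items):
    return sum(price * qty for price, qty in items)

def invoice_total(items, tax_rate):
    return subtotal(items) * (1.0 + tax_rate)

assert invoice_total_before([(2.0, 3)], 0.1) == invoice_total([(2.0, 3)], 0.1)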
I remember meeting Bill at the OOPSLA conference in 1992. We sat in a café and discussed
some of the work I'd done in building a conceptual framework for healthcare. Bill told me about
his research, and I remember thinking, "Interesting, but not really that important." Boy was I
wrong!
John Brant and Don Roberts have taken the tool ideas in refactoring much further to produce the
Refactoring Browser, a refactoring tool for Smalltalk. They contribute Chapter 14 to this book,
which further describes refactoring tools.
And me? I'd always been inclined to clean code, but I'd never considered it to be that important.

"But they couldn't have avoided making a few assumptions about the way we'd think, and the kind of technology we'd be using-and some of those assumptions are hound to be wrong. I can easily imagine a space-faring civilization that wouldn't have tried the neutron phase experiment in a million years. So maybe the meaning of the rest of the data will be inaccessible to us ... but if it is, that won't be out of malice, and it won't be because their whole conceptual framework was beyond our comprehension. It will just be sheer bad luck."
Paulo gave up his smirk of tolerant amusement, as if reluctantly conceding that this was an appealing vision of the Transmuters, however naive. Yatima seized the moment.
"And whatever you think about the map yourself, just remember that Orlando can't dismiss it the way you can. Everything about this drags him back to Lacerta."

Most pollution has been eliminated from the smoke stacks and outflow pipes of factories in the rich world, and leading firms are pushing successfully for ever higher eco-efficiency.
These apparent successes made it difficult to talk about problems of overshoot around 1990. The difficulty was increased by the lack of basic data and even elementary vocabulary related to overshoot. It took more than two decades before the conceptual framework (for example, distinguishing growth in the Gross Domestic Product (GDP) from growth in the ecological footprint) matured sufficiently to enable an intelligent conversation about the limits to growth issue. And world society is still trying to comprehend the concept of sustainability, a term that remains ambiguous and widely abused even sixteen years after the Brundtland Commission coined it.

The experimental tinkering of games—a parallel universe where rules and conventions are constantly being reinvented—creates a new supply of metaphors that can then be mapped onto more serious matters. (Think how reliant everyday speech is on metaphors generated from games: we “raise the stakes”; we “advance the ball”; we worry about “wild cards”; and so on.) Every now and then, one of those metaphors turns out to be uniquely suited to a new situation that requires a new conceptual framework, a new way of imagining. A top-down state could be described as a body or a building—with heads or cornerstones—but a state governed by contractual interdependence needed a different kind of metaphor to make it intelligible. The runaway success of The Game of Chess—first as a sermon, then as a manuscript, and finally as a book—suggests just how valuable that metaphor turned out to be.
We commonly think of chess as the most intellectual of games, but in a way its greatest claim to fame may be its allegorical power.

State-building, done (as Sutton indicates in his chapter) without regard for the democratic legitimacy of the governments involved, implicated foreign donors in the human rights abuses of recipients
and failed to prevent coups, revolutions, and wars that led to political
breakdown. Pakistan, an early target of foreign development efforts, is a
prime example. Economic planning fell out of favor intellectually with the
Reagan-Thatcher revolution in the late 1980s and was replaced by orthodox
economic liberalism as the dominant conceptual framework. But most
importantly, none of the approaches popular in any given decade proved
adequate to promote sustained long-term growth in countries with weak
institutions or where local elites were uninterested or incapable of managing the development process themselves. The record is particularly horrendous in the world’s poorest region, sub-Saharan Africa, many of whose
countries have experienced negative per capita growth in gross domestic
product (GDP) and regression in institutional development, even though
some 10 percent of the entire region’s GNP is provided by outside donors.7
Where sustained economic growth did occur, particularly in East Asia, it
tended to come about under the leadership of domestic elites and not as a
result of the efforts of foreign donors, lenders, or allies.8
To the extent that there has been intellectual progress in this area, it
lies in an appreciation for the complexity and multidimensionality of the
development problem.

Instead, they worked on building a brain-inspired machine that was “capable of carrying out complex creative activities,” continuously seeking to reveal the “higher intellect” of machines modeled after mechanisms of the mind.18 If there was a danger in the brain and machine metaphor, it ran only one way for Glushkov: “the danger is not that machines will begin to think like people,” he intoned, “but that people will begin to think like machines.”19
Are National Networks More Like Brains or Nervous Systems?
In 1962, Glushkov imagined the OGAS as a “brainlike” (mozgopodobnyi) network for managing the national economy and extending the life experience of the nation and its inhabitants. Consider the implications for the cybernetic analog between neural networks and national computer networks. As already noted, cybernetics brings to bear powerful conceptual frameworks for imagining structural analogies between ontologically different information systems—organisms, machines, societies, and others. The cybernetic instinct rushes many visionaries to profound structural insights but also to overly determined design decisions. The circuitry of a computer chip and the neural networks of a mind do not resemble each other, although cybernetics earns its keep by finding usable analogs between them.

Our information is narrowed to only what the telescope provides. If we don't experience a wider information field, we lose knowledge of that field's existence. We become the hermit in the cave who knows only what the TV offers. We experience what is, not knowing what isn't. The people who control television become the choreographers of our internal awareness. We give way to their process of choosing information. We live within their conceptual frameworks. We travel to places on the planet which they choose and to situations which they decide we should see. What we can know is narrowed to what they know, and then narrowed further to what they select to send us through this instrument of theirs.
The kind of people who control television is certainly a problem. But this is only the beginning. While our field of knowledge is constrained by their venality and arrogance, the people who run television are constrained by the instrument itself.

In a chess tournament, a few highly skilled players will win most of the games by exploiting the mistakes of weaker players. Much like chess, it seems only reasonable to expect a few highly skilled market participants to interpret the same information—the current position of the market chessboard, so to speak—differently from the majority, and reach variant conclusions about the probable market direction. In this conceptual framework, mistakes by a majority of less skilled market participants can drive prices to incorrect levels (that is, prices out of line with the unknown equilibrium level), creating opportunities for more skilled traders. Quite simply, equal dissemination of knowledge does not imply equal use of knowledge.
Since all market participants pay commissions and are subject to slippage, the majority of participants are doomed to below-average results.

It’s been drilled into my head ever since I was old enough to play One of These Things Just Doesn’t Belong and my father made me point out the token white guy in the Lakers team photo. Mark Landsberger, where are you when I need you? “The distinguishing feature of Stage II blackness is a heightened awareness of race. Here race is still all-consuming, but in a more positive fashion. Blackness becomes an essential component in one’s experiential and conceptual framework. Blackness is idealized, whiteness reviled. Emotions range from bitterness, anger, and self-destruction to waves of pro-Black euphoria and ideas of Black supremacy…” To avoid detection I go under the table, but the joint’s not hitting right. I can’t get any intake. From my newfound hiding place I struggle to keep the ember burning, while catching odd-angled glimpses of photographs of Foy Cheshire, Jesse Jackson, Sojourner Truth, Moms Mabley, Kim Kardashian, and my father.

Thus correlations do not supersede causation, but rather should form the basis for
additional research to establish if such correlations are indicative of causation. Only then can we get
a sense as to how meaningful the causes of the correlation are.
While the idea that data can speak for themselves free of bias or framing may seem like an attractive
one, the reality is somewhat different. As Gould (1981: 166) notes, ‘inanimate data can never speak
for themselves, and we always bring to bear some conceptual framework, either intuitive and ill-formed, or tightly and formally structured, to the task of investigation, analysis, and interpretation’.
Making sense of data is always framed; examined through a particular lens that casts how it is
interpreted. Even if the process is automated in some way, the algorithms used to process the data are
imbued with particular values and contextualised within a particular scientific approach.

The military and political success of ISIS, and the ineptitude of Western powers in constructing a multi-religious Iraq have planted the seeds of yet another endless war in the most unstable and strategically decisive region of the planet. The investigation presented in this book stops at the threshold of understanding this barbaric confrontation, as it would require a different set of information and a different conceptual framework.
I would simply add that the inability of authentic social movements to overcome the violence of the state, and their subsequent attempt to engage in the same kind of violence usually end up in the destruction of the social movement, and in justifying additional violence. Under such conditions, the actors, state or non-state, able to implement the highest level of violence are the winners, while people at large are the dramatic losers under all circumstances.

“True success in strategic issues involves manipulating a situation so effectively that the outcome is inevitably in favor of Chinese interests. This emerges from the oldest Chinese strategic thinker, Sun Zi, who argued that ‘every battle is won or lost before it is ever fought.’”18
The United States understands how to handle a traditional military-political advance. After all, this was the nature of the Soviet threat and the Nazi rise to power. The United States has a conceptual framework as well as the tools—weapons, aid packages, alliances—with which to confront such an advance. Were China to push its weight around, anger its neighbors, and frighten the world, Washington would be able to respond with a set of effective policies that would take advantage of the natural balancing process by which Japan, India, Australia, and Vietnam—and perhaps others—would come together to limit China’s emerging power.

Broadly speaking, in tech circles, open systems—like the Internet itself—are always good, while closed systems—like the classic broadcast model—are bad. Open is Google and Wi-Fi, decentralization and entrepreneurialism, the United States and Wikipedia. Closed equals Hollywood and cable television, central planning and entrenched industry, China and the Encyclopaedia Britannica. However imprecisely the terms are applied, the dichotomy of open versus closed (sometimes presented as freedom versus control) provides the conceptual framework that increasingly underpins much of the current thinking about technology, media, and culture.
The fetish for openness can be traced back to the foundational myths of the Internet as a wild, uncontrollable realm. In 1996 John Perry Barlow, the former Grateful Dead lyricist and cattle rancher turned techno-utopian firebrand, released an influential manifesto, “A Declaration of the Independence of Cyberspace,” from Davos, Switzerland, during the World Economic Forum, the annual meeting of the world’s business elite.

But with efficiency wages employers are doing exactly that. They are paying more for their labor than needed merely to get their labor force to show up for work. In contrast it would make no sense to have a theory of the stock market or the wheat market in which buyers do not want to pay less for what they buy.
The efficiency wage theory further contradicts economists’ theoretical intuitions because it violates their usual conceptual framework of how to set up a theoretical problem. The usual methodology of economics is to ask questions, first on the demand side of the market and then on the supply side. According to this protocol, regarding the purchase of labor one would first ask: who are the potential employers? And then, at any given wage, how much labor would they want? The answers to these queries yield the demand for labor.

Hacker, ‘Military institutions, Weapons, and Social Change: Toward a New History of Military Technology’, Technology and Culture, Vol. 35 (1994), pp. 768–834.
2. J. F. C. Fuller, Armament and History (New York: Scribners, 1945).
3. Van Creveld, for example, is clear that there are differences: ‘since technology and war operate on a logic which is not only different but actually opposed, the conceptual framework that is useful, even vital, for dealing with the one should not be allowed to interfere with the other’. Martin Van Creveld, Technology and War: from 2000 BC to the Present (London: Brassey’s, 1991), p. 320.
4. Bernard Davy, Air Power and Civilisation (London: Allen & Unwin, 1941), p. 116.
5. Ibid., p. 148.
6. H. G. Wells, A Short History of the World (Harmondsworth: Penguin, 1946), p. 308.
7.

The daily decisions he made on Wall Street were based on probability: “Success came by evaluating all the information available to try to judge the odds of various outcomes and the possible gains or losses associated with each.”15
This creation of a thought construct is reminiscent of George Soros’s theory of reflexivity, developed while he was a student at the London School of Economics. Within this conceptual framework, which Soros credits for much of his success, he focuses on the relationship between thinking and reality.
Klaus Schwab pioneered the stakeholder principle, “according to which the management of an enterprise is not only accountable to its shareholders, but must also serve the interests of all stakeholders . . . who may be affected or concerned by its operations.” He later built on this theory to create the concept of “global corporate citizenship.”16 The stakeholder principle is the ideological foundation upon which the WEF is built and gives it legitimacy.

pages: 293, words: 88,490

The End of Theory: Financial Crises, the Failure of Economics, and the Sweep of Human Interaction
by
Richard Bookstaber

Hilbert argued for a large-scale mathematical initiative to devise mechanical procedures that they could follow by rote, that in the end could lead to the proof of any mathematical proposition, giving a decided yes or no to the question. Of course, if the process is mechanical, why not develop a machine to do it—a “computing machine” that does the same rote, mechanical tasks that the human computers can do?6
This is what Turing set himself to do. He developed a conceptual framework for a computer that could take in any set of instructions, execute them faithfully, and deliver the result. Turing’s colleague Alonzo Church (who, independently of Turing, had also demonstrated the impossibility of Hilbert’s program) called this the Turing machine. Turing went a step further, and added to the machine a set of instructions that were internal to the machine and were not altered over the course of execution.
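A minimal sketch of such a machine, assuming a simple table-driven encoding; the names and the little bit-flipping program are illustrative, not Turing's own notation.

def run_turing_machine(program, tape, state="start", head=0, max_steps=1000):
    """Execute a table-driven Turing machine.

    program maps (state, symbol) -> (write, move, next_state), with move
    in {-1, +1}. Halts on state 'halt' or when no rule applies.
    """
    tape = dict(enumerate(tape))  # sparse tape; blank cells read as '_'
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")
        if (state, symbol) not in program:
            break
        write, move, state = program[(state, symbol)]
        tape[head] = write
        head += move
    return "".join(tape[i] for i in sorted(tape))

# A two-state machine that flips every bit, then halts at the first blank.
flipper = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
    ("start", "_"): ("_", -1, "halt"),
}
print(run_turing_machine(flipper, "10110"))  # -> 01001_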

In each case, one must wonder if the implementation language was well chosen, if the
design method was well chosen, or if the designer had failed to adapt to the tool in hand.
There is nothing unusual or shameful in such a mismatch. It is simply a mismatch that delivers
sub-optimal designs and imposes unnecessary burdens on programmers. It does the same to
designers when the conceptual framework of the design method is noticeably poorer than C++’s
conceptual framework. Therefore, we avoid such mismatches wherever possible.
The following discussion is phrased as answers to objections because that is the way it often
occurs in real life.
24.2.1 Ignoring Classes [lang.ignore.class]
Consider design that ignores classes. The resulting C++ program will be roughly equivalent to the
C program that would have resulted from the same design process – and this program would again
be roughly equivalent to the COBOL program that would have resulted from the same design process.

…

These are then refined
repeatedly (§23.4.3.5) to reach a set of class relationships that are sufficiently general, flexible, and
stable to be of real help in the further evolution of a system.
The best tool for finding initial key concepts/classes is a blackboard. The best method for their
initial refinement is discussions with experts in the application domain and a couple of friends.
Discussion is necessary to develop a viable initial vocabulary and conceptual framework. Few people can do that alone. One way to evolve a set of useful classes from an initial set of candidates is
to simulate a system, with designers taking the roles of classes. This brings the inevitable absurdities of the initial ideas out into the open, stimulates discussion of alternatives, and creates a shared
understanding of the evolving design. This activity can be supported by and documented by notes
on index cards.

For this odd Assertion, I find to be contradicted by frequent practice of Diamond Cutters: And particularly having enquir’d of one of them, to whom abundance of those Gems are brought to be fitted for the Jeweller and Goldsmith, he assur’d me, That he makes much of his Powder to Polish Diamonds with, only, by beating board Diamonds (as they call them) in a Steel or Iron Morter, and that he has that way made with ease, some hundreds of Carrats of Diamond Dust.73
The assertion that goat’s blood softens diamonds seemed odd to Boyle, because he had escaped from the old conceptual framework of sympathy and antipathy, according to which there was a natural sympathy between the lodestone and goat’s blood and a natural antipathy between the diamond and goat’s blood. But all that was required to abolish this conceptual scheme was an insistence on direct as opposed to indirect experience.xi
The result of such an approach, which looks to us just like common sense but was revolutionary at the time, was a transformation in the reliability of knowledge.74 William Wotton, in his Reflections upon Ancient and Modern Learning (1694), put it like this:
Nullius in verba [‘Take no man’s word for it’, i.e. defer to no authority]xii is not only the motto of the ROYAL SOCIETY, but a received Principle among all the Philosophers of the present Age.

…

Science offers reliable knowledge (that is, reliable prediction and control), not truth.29
One day we may discover that some of our most cherished forms of knowledge are as obsolete as epicycles, phlogiston, caloric, the electromagnetic aether and, indeed, Newtonian physics. But it seems virtually certain that future scientists will still be talking about facts and theories, experiments and hypotheses. This conceptual framework has proved remarkably stable, even while the scientific knowledge it is used to describe and justify has changed beyond all recognition. Just as any progressive knowledge of natural processes would need a concept akin to ‘discovery’, so as further advances occurred it would need a way of representing knowledge as both reliable and defeasible: terms that do the work done by ‘facts’, ‘theories’ and ‘hypotheses’ would have to play a role in any mature scientific enterprise.

I first tried to find close relevance within established disciplines [such as artificial intelligence,] but in each case I found that the people I would talk with would immediately translate my admittedly strange (for the times) statements of purpose and possibility into their own discipline's framework."9 At the 1960 meeting of the American Documentation Institute, a talk he gave was greeted with yawns, and his proposed augmentation environment was dismissed as just another information-retrieval system. No, Engelbart realized, if his augmentation ideas were ever going to fly, he would have to create a new discipline from scratch. And to do that, he would have to give this new discipline a conceptual framework all its own: a manifesto that would lay out his thinking in the most compelling way possible. Creating that manifesto took him the better part of two years. "Augmenting Human Intellect: A Conceptual Framework" wouldn't be completed until October 1962. But Engelbart was nothing if not dogged. "By 'augmenting man's intellect,' " he wrote, struggling to articulate his own gut feelings, "we mean increasing the capability of a man to approach a complex problem situation, to gain comprehension to suit his particular needs, and to derive solutions to problems. . . .

Davidson, Morgan Partner; Paul Warburg, Kuhn Loeb partner; Frank A. Vanderlip, vice-president of Rockefeller’s National City Bank of New York; Charles D. Norton, president of Morgan’s First National Bank of New York; and Professor A. Piatt Andrew, head of the NMC staff, who had recently been made an Assistant Secretary of the Treasury under Taft, and who was a technician with a foot in both the Rockefeller and Morgan camps.31
But of course the conceptual framework for the central bank already was in place long before the Jekyll Island meeting. The architect of that design, Lawrence Laughlin, became the most visible and credible national advocate of the Federal Reserve proposal. Laughlin testified before the Congress and made statements in favor of the proposal put forward by the National Monetary Commission. His brainchild not only centralized and rationalized the nation’s currency system, but Laughlin’s creation had the additional benefit of removing the machinations of the private bankers from public view.

Leda Cosmides and John Tooby, Co-Directors of the Center for Evolutionary Psychology at the University of California, Santa Barbara, have summarized the field this way in a 1994 descriptive brochure:
Evolutionary psychology is based on the recognition that the human brain consists of a large collection of functionally specialized computational devices that evolved to solve the adaptive problems regularly encountered by our hunter-gatherer ancestors. Because humans share a universal evolved architecture, all ordinary individuals reliably develop a distinctively human set of preferences, motives, shared conceptual frameworks, emotion programs, content-specific reasoning procedures, and specialized interpretation systems— programs that operate beneath the surface of expressed cultural variability, and whose designs constitute a precise definition of human nature.
In his new book, How the Mind Works (W. W. Norton, 1997), Steven Pinker describes these specialized computational devices as "mental modules." The "module" is a metaphor; it is not necessarily located in a single spot in the brain, and should not be confused with the nineteenth-century notion of phrenologists, who allocated specific bumps on the head to specific brain functions.

In a sense, p#two and all of its children share a z-index of 7, while having their own mini-z-index within the context of p#two.
Put another way, it's as though the b element has a z-index of 7,36 while the em's value is 7,-42. These are merely implied conceptual values; they don't conform to anything in the specification. However, such a system helps to illustrate how the overall stacking order is determined. Consider:
p#one      10
p#one b    10,-404
p#two b    7,36
p#two      7
p#two em   7,-42
p#three b  1,23
p#three    1
This conceptual framework precisely describes the order in which these elements would be stacked. While the descendants of an element can be above or below that element in the stacking order, they are all grouped together with their ancestor.
It is also the case that an element that establishes a stacking context for its descendants is placed at the 0 position of that context's z-axis. Thus, you could extend the framework as follows:
p#one      10,0
p#one b    10,-404
p#two b    7,36
p#two      7,0
p#two em   7,-42
p#three b  1,23
p#three    1,0
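Because these conceptual pairs order lexicographically, plain tuple comparison reproduces the stacking order; a small sketch (the selector names come from the text, while the tuple representation is merely illustrative, not anything in the CSS specification):

# Conceptual (stacking-context, inner) z-index pairs from the listing above.
stack = {
    "p#one":      (10, 0),
    "p#one b":    (10, -404),
    "p#two":      (7, 0),
    "p#two b":    (7, 36),
    "p#two em":   (7, -42),
    "p#three":    (1, 0),
    "p#three b":  (1, 23),
}

# Tuples compare element by element, which matches how stacking contexts
# work: the outer value groups descendants with their ancestor, and the
# inner value only breaks ties within that group.
for name, z in sorted(stack.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:10s} {z}")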
There remains one more value to examine.

It is a very partial version of the truth (the British suffered many more casualties in the operation). But it was enough. The empire that emerged from the war was a much less top-down association. It was soon time to send for the now septuagenarian Arthur Balfour, who chaired a committee which agreed that the white Dominions of the empire could in future be as independent as they chose.
At the end of the war, the whole conceptual framework of empire looked shaky. Empire had become an official project and the awful loss of life had done nothing at all to enhance belief in the wisdom of government. (The great celebrant of empire, Rudyard Kipling, had lost his own son at the battle of Loos in September 1915: just turned eighteen when he was last seen staggering through the mud, half his face hanging off.) The emerging force in British politics, the Labour party, was more interested in improving living conditions at home than in the country’s possessions abroad.

Even such shrewd observers of modern politics as Zbigniew Brzezinski and Carl Friedrich told readers of their 1965 classic, Totalitarian Dictatorship and Autocracy, to forget institutions altogether: “The reader may wonder why we do not discuss the ‘structure of government,’ or perhaps ‘the constitution’ of these totalitarian systems. The reason is that these structures are of very little importance.”
Such rigid conceptual frameworks may have helped in understanding Stalinism, but this is too simplistic a perspective to explain much of what is going on inside today’s authoritarian states, which are busy organizing elections, setting up parliaments, and propping up their judiciaries. If authoritarian regimes are bold enough to allow elections, for reasons of their own, what makes us think they wouldn’t also allow blogs for reasons that Western analysts may not be able to understand yet?

Update
The update method of the ContentProvider class is analogous to the REST UPDATE operation. It replaces records in the database with updated records.
Delete
The delete method of the ContentProvider class is analogous to the REST DELETE operation. It removes matching records from the database.
Tip
REST stands for “Representational State Transfer.” It isn’t a formal protocol the way that HTTP is. It is more of a conceptual framework for using HTTP as a basis for easy access to data. While REST implementations may differ, they all strive for simplicity. Android’s content provider API formalizes REST-like operations into an API and is designed in the spirit of REST’s simplicity. You can find more information on REST on Wikipedia: http://en.wikipedia.org/wiki/REST.
Content provider components are the heart of the Android content model: by providing a ContentProvider, your application can share data with other applications and manage the data model of an application.
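As a language-agnostic illustration of that REST-like quartet, here is a toy sketch; this is not the Android ContentProvider API, and every name in it is invented for illustration.

# Toy sketch of the REST-like CRUD operations a content provider formalizes.
class ToyProvider:
    def __init__(self):
        self._rows = {}    # row id -> record dict
        self._next_id = 1

    def query(self, where=None):          # ~ REST read
        return [r for r in self._rows.values()
                if where is None or where(r)]

    def insert(self, record):             # ~ REST create
        rid, self._next_id = self._next_id, self._next_id + 1
        self._rows[rid] = dict(record, id=rid)
        return rid

    def update(self, rid, changes):       # ~ REST update: replace fields
        self._rows[rid].update(changes)

    def delete(self, where):              # ~ REST delete: remove matches
        doomed = [i for i, r in self._rows.items() if where(r)]
        for i in doomed:
            del self._rows[i]
        return len(doomed)

p = ToyProvider()
rid = p.insert({"name": "note", "done": False})
p.update(rid, {"done": True})
print(p.query(lambda r: r["done"]))  # [{'name': 'note', 'done': True, 'id': 1}]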


Schivelbusch’s assessment of train travel is an instructive point of comparison here because he is explicitly concerned with the effect of transportation technologies on the collective experience of space and time. According to his thesis, the high speed of train travel fundamentally distorted passengers’ perception of the natural environment by inhibiting their ability to take in the details of their surroundings. At the same time, the train cultivated a new perspective that he calls the panoramic view: one that frames the landscape as an object to be aesthetically contemplated and appreciated, much like a painting.167 This techno-aesthetic practice encouraged the objectification of the landscape and contributed to the development of a broader industrialized consciousness that frames space and time in the abstract.168 Bicycling did not create the same subject/object split as the passenger train, but like train travel it facilitated a unique way of seeing that was both a literal vantage point as well as a conceptual framework that gave meaning to one’s point of view, and by extension, one’s form of mobility. An aptly titled 1895 magazine piece called “Some Thoughts on Landscape” hints at some of the ways in which bicycling constructs the landscape as both a subject of auto-mobile desire and an object of the bicyclist’s gaze:

Our high pressure, our covetous greed of the minute, have placed the bicycle upon the road in its thousands; and out of evil there has in this way come good, for it is to the green country that the fevered youth of the nation race, with rustling rubber and sharp-sounding bell. As they rush through the air and flash past the village and field, there is borne in upon them the educational germ of a love for landscape; they see, and they cannot help noting, the contrast between smoke-grimed cities and “fresh woods and pastures new.”169

Here one can see bicycling contributing to a specifically aesthetic conceptualization of nature; one that resonated with a population that sought to preserve landscapes through photography. Indeed, several companies produced and marketed cameras explicitly for cyclists, including models that could be mounted directly to the bicycle itself.

My account of Douglas Engelbart’s work draws on readings from his work collected at the Bootstrap Institute Web site at http://www.bootstrap.org/, as well as the accounts in Thierry Bardini, Bootstrapping (Stanford University Press, 2000); Howard Rheingold, Tools for Thought (Simon & Schuster, 1985); and John Markoff, What the Dormouse Said (Viking, 2005).
The video of Engelbart’s 1968 demo is at http://sloan.stanford.edu/mousesite/1968Demo.htm.
“store ideas, study them”: From the Invisible Revolution Web site, devoted to Engelbart’s ideas, at http://www.invisiblerevolution.net/nls.htm.
“successful achievements can be utilized”: From the “Whom to Augment First?” section of Engelbart’s 1962 paper, “Augmenting Human Intellect: A Conceptual Framework,” at http://sloan.stanford.edu/mousesite/EngelbartPapers/B5_F18_ConceptFrameworkPt4.htm.
“an improving of the improvement process”: Bootstrap Institute home page at http://www.bootstrap.org/.
“the feeding back of positive research”: In the “Basic Regenerative Feature” section of Engelbart’s 1962 paper, “Augmenting Human Intellect,” at http://sloan.stanford.edu/mousesite/EngelbartPapers/B5_F18_ConceptFrameworkPt4.htm.

In the winter of 2009, Lin splurged on a trip to Thailand for himself and his mother. Before he flew, he visited a bookstore in Chengdu and happened on the memoir of a monk. He was not prepared for the effect it had on him. “I found Buddha to be an inspiration. He invited me to think bravely about this world,” Lin said. “Buddha could challenge any social norms, such as the caste system of India,” he went on. “He rethought the conceptual framework he had from day one.”
Lin spent the trip to Thailand in his hotel room, absorbed in the book—“I never even went to the pool”—and when he returned, he began frequenting a Buddhist institute near his apartment in Beijing. His moment of transformation came when he grasped the idea, as he put it, that “this world is a fantasy.” He found it impossible to imagine going back to the work he had before.

Sequencing the genome of single cells has made it clear that we’re all mosaics.16,17 For example, researchers at the Salk Institute did single-cell sequencing of brain cells from individuals who had died and found striking differences from one cell to the next.17 Part of this mosaicism is explained by so-called de novo mutations, which occur in cells when they divide over the course of one’s life. We’ve also learned about the remarkable extent of heterogeneity that exists from one cancer cell to another. So moving from the conceptual framework of sequencing an individual’s DNA to that of a cell has already taught us some invaluable, disease-relevant lessons.
There are important limitations to acknowledge about sequencing. When a person undergoes sequencing (some are now calling this getting “genomed”) there will typically be about 3.5 million variant bases compared with the human reference genome. But, as we discussed with BRCA, Myriad Genetics, and the Supreme Court ruling, most will be variants of unknown significance (VUS).

He became a successful economic consultant, advising many big corporations, including Alcoa, J.P. Morgan, and U.S. Steel. In 1968, he advised Richard Nixon during his successful run for the presidency, and under Gerald Ford he acted as chairman of the White House Council of Economic Advisers. In 1987, he returned to Washington, this time permanently, to head the Fed and personify the triumph of free market economics.
Now Greenspan was on the defensive. An ideology is just a conceptual framework for dealing with reality, he said to Waxman. “To exist, you need an ideology. The question is whether it is accurate or not. What I am saying to you is, yes, I found a flaw. I don’t know how significant or permanent it is, but I have been very distressed by that fact.” Waxman interrupted him. “You found a flaw?” he demanded. Greenspan nodded. “I found a flaw in the model that I perceived as the critical functioning structure that defines how the world works, so to speak,” he said.

Journalists, academics, and the like have, of course, penned hundreds of articles and op-eds on the origins of the crisis and the responses to it—including a few by yours truly. But mass media outlets require such brevity that anything remotely resembling a comprehensive explanation of something as complex as the financial crisis is out of the question. Twelve seconds of TV time constitutes a journalistic essay.
While this book tells the story in what I hope is an intelligible manner, its more important goal is to provide a conceptual framework through which both the salient facts and the litany of policy responses can be understood. More concretely, I want to provide answers to the following three critical questions:
How Did We Ever Get into Such a Mess?
The objective here is not to affix blame, though some of that will inevitably (and deservedly) be done, but rather to highlight and analyze the many mistakes that were made so we don’t repeat them.

There’s evidence of this being proposed among the ancient Greeks and in India and Islamic cultures during the European Middle Ages. I think this question shows the extent to which a major paradigm shift depends on more than just some additional empirical data and more than just a brilliant new theory using a new concept. It really depends on a much larger context so that the seed of a potentially powerful idea falls on a whole different soil, out of which this organism, this new conceptual framework, can grow—literally a “conception” in a new cultural and historical womb or matrix.
Richard Tarnas and Dean Radin, “The Timing of Paradigm Shifts,” Noetic Now, January 2012.
15 In the corporate sector, worker cooperatives have failed to achieve any meaningful traction. The ones that prevail are often run on practices that are a combination of Orange and Green. One often-cited success story is Mondragon, a conglomerate of cooperatives based in a Basque town of the same name in Spain (around 250 companies, employing roughly 100,000 people, with a turnover of around €15 billion).

I have applied RCT methodology in my own research. I believe that packaged interventions should be verified by RCTs more often. I’m on the board of JPAL’s sister organization, Innovations for Poverty Action, a nonprofit that also runs RCTs and whose work I deeply respect and support.
But if RCTs are an essential tool of decision-making for social policy, they still need to be placed within a larger conceptual framework. If a world-renowned economist and careful experimentalist such as Duflo can’t avoid the pathologies of packaged interventions, it’s not clear that other researchers running RCTs can either. A single methodology cannot be the sole paradigm for determining what’s right for social change. The problem is not RCTs themselves as much as careless interpretation of their results.19 An RCT is just one good tool in the toolbox of program evaluation.

For example, if you want to quit smoking, you could write a large check to someone you see often with permission to cash the check if you are seen smoking. Or you can make that bet with yourself, what Ainslie calls a “private side bet.” You could say to yourself, “I won’t watch the game on television tonight until I finish [some task you are tempted to postpone].”
Armed with the insights of Strotz, Mischel, and Ainslie, I set out to create a conceptual framework to discuss these problems that economists would still recognize as being economics. The crucial theoretical question I wanted to answer was this: if I know I am going to change my mind about my preferences (I will not limit myself to a few more cashew nuts, as I intend, rather I will eat the entire bowl), when and why would I take some action to restrict my future choices?
We all have occasions on which we change our minds, but usually we do not go to extraordinary steps to prevent ourselves from deviating from the original plan.
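A minimal numeric sketch of the Ainslie-style preference reversal that motivates such commitment devices; the discount function and all the numbers here are illustrative assumptions, not Thaler's model.

def value(amount, delay_days, k=0.5):
    """Hyperbolic discounting: value falls as 1/(1 + k*delay). k is illustrative."""
    return amount / (1.0 + k * delay_days)

small_soon = (60.0, 1)    # $60 tomorrow
large_late = (100.0, 5)   # $100 in five days

for days_out in (10, 0):  # evaluate the same pair from afar, then up close
    v_small = value(small_soon[0], small_soon[1] + days_out)
    v_large = value(large_late[0], large_late[1] + days_out)
    choice = "large-late" if v_large > v_small else "small-soon"
    print(f"{days_out:2d} days out: prefer {choice} ({v_small:.1f} vs {v_large:.1f})")

Viewed from ten days away the larger-later reward wins, but up close the smaller-sooner one does: the preferences reverse with the passage of time, which is exactly the change of mind that makes restricting one's future choices rational.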

Despite its limitations, our experience in introducing this model and earlier versions of it across the professions has been encouraging, and seems to capture the substance and direction of change as well as the choices that the professions are facing. Our hope for the model is that it helps professionals explain and predict the developments they are witnessing within their own fields, and that it offers a common vocabulary and conceptual framework for comparative analysis across different professional disciplines.
There is one further characteristic of the model that should be borne in mind from the outset. We are not suggesting, for any particular piece of professional work—for instance, the treatment of a patient, the resolution of a dispute for a legal client, the teaching of a class, the auditing of a company’s accounts, the investigation of events or the reporting of a story for a reader—that the challenge for professionals is to determine into which of our boxes their work sits.

GEORGE SOROS’S THEORY OF BOOM/BUST CYCLES AND REFLEXIVITY
George Soros is one of the most successful investors of all times. In addition to being a successful investor, he is a philanthropist, opinion maker, and philosopher. Soros has developed a theory of boom/bust cycles and reflexivity, as he describes in the following excerpt from a recent lecture.5
Let me state the two cardinal principles of my conceptual framework as it applies to the financial markets. First, market prices always distort the underlying fundamentals. The degree of distortion may range from the negligible to the significant. This is in direct contradiction to the efficient market hypothesis, which maintains that market prices accurately reflect all the available information. Second, instead of playing a purely passive role in reflecting an underlying reality, financial markets also have an active role: they can affect the so-called fundamentals they are supposed to reflect.

[Figure 8.2: Participation versus Speed of Decision Making; expected decision costs plotted against the percentage of the population required to make a decision. Source: James M. Buchanan and Gordon Tullock, The Calculus of Consent: Logical Foundations of Constitutional Democracy (Ann Arbor: University of Michigan Press, 1962).]
that has only one veto gate (the dictator’s will), to a perfect consensual
democracy in which all citizens have to agree to a policy.
The concept of veto gates in a sense reprises the conceptual framework laid out by James Buchanan and Gordon Tullock to explain the
principle of majority voting, where they posited a clear trade-off between
legitimacy and effectiveness (see figure 8.2).21 The more members of
a society who participate in a decision, the higher are the expected
decision costs; for large societies, the costs rise exponentially as one
approaches consensus. Buchanan and Tullock argued that the principle
of majority voting has no inherent normative logic; one can choose any
point on the curve in figure 8.2 as an appropriate trade-off between
effectiveness and legitimacy, and in the case of constitutional law, supermajorities are, indeed, often required.
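A small sketch of that trade-off; the specific cost curves are illustrative assumptions rather than Buchanan and Tullock's, but they reproduce the exponential rise in decision costs near consensus and yield an interior optimum below unanimity.

import math

def external_costs(share):
    # Cost of decisions imposed on an individual without his or her consent;
    # falls as the required share of consenting voters grows. Illustrative form.
    return 100.0 * (1.0 - share) ** 2

def decision_costs(share):
    # Expected cost of actually reaching agreement; rises exponentially as
    # the rule approaches consensus, as the text describes. Illustrative form.
    return math.exp(6.0 * share) - 1.0

best = min((s / 100.0 for s in range(1, 100)),
           key=lambda s: external_costs(s) + decision_costs(s))
print(f"cost-minimizing decision rule: {best:.0%} of voters")  # interior optimum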

Cloward
When this ground-breaking work was first published in 1971, it dramatically revised our understanding of the welfare system and its hidden role in the larger socioeconomic framework of the United States. Substantially updated for the 1990s, this analysis ranges from the early history of poor relief through the inception of welfare during the Great Depression to its massive erosion during the Reagan and Bush years. The authors provide a conceptual framework that sharply illuminates the problems current and future administrations will encounter as they attempt to rethink the welfare system.
Political Science/Sociology/Social Work/0-679-74516-5
THE WORK OF NATIONS
Preparing Ourselves for 21st-Century Capitalism
by Robert B. Reich
“An elegant and penetrating analysis of the forces that are leading to the globalization of the international system and to the growing anachronism of thinking solely in ‘national’ terms.

“There is little formal geometry in it, and, of course, no mention of Euclid, mostly heuristics, the kind of knowledge that comes out of a master guiding his apprentices . . . Builders could figure out the resistance of materials without the equations we have today—buildings that are, for the most part, still standing.”6
These examples do not show that theoretical knowledge is worthless. Quite the reverse. A conceptual framework is vital even for the most practical men going about their business. In many circumstances, new theories have led to direct technological breakthroughs (such as the atom bomb emerging from the Theory of Relativity).
The real issue here is speed. Theoretical change is itself driven by a feedback mechanism, as we noted in chapter 3: science learns from failure. But when a theory fails, as when the Unilever mathematicians failed in their attempt to create an efficient nozzle design, it takes time to come up with a new, all-encompassing theory.

A scholar of business, MacMillan devised his framework in the context of his field—business and management—and thus goes on to examine power dynamics within firms. But there is no reason why his approach cannot be applied to other fields—which is what I do in this book.
A third big advantage of this way of looking at power is that it lets us distinguish among concepts such as power, might, force, authority, and influence. For instance, people commonly confuse power with influence. Here, MacMillan’s conceptual framework is very helpful. Both power and influence can change the behavior of others or, more specifically, make others do something or stop them from doing it. But influence seeks to change the perception of the situation, not the situation itself.2 So the MacMillan framework helps show that influence is a subset of power, in the sense that power includes not only actions that change the situation but also actions that alter the way the situation is perceived.

The degree of exploitation corresponds to the quantity of surplus labor time, that is, the portion of the working day that extends beyond the time necessary for the worker to produce value equal to the wage he or she is paid. Surplus labor time and the surplus value produced during that time are the key to Marx’s definition of exploitation. This temporal measure gave Marx a clear and convenient conceptual framework and also made his theory directly applicable in his era to the workers’ struggle to shorten the length of the working day.
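In the standard notation of Capital, which the passage paraphrases but does not write out, this temporal measure is a simple ratio; the worked numbers below are our own illustration.

```latex
% Rate of exploitation (rate of surplus value) in Marx's standard notation;
% the numerical example is ours, added only for concreteness.
\[
  e = \frac{s}{v}
    = \frac{\text{surplus labor time}}{\text{necessary labor time}},
  \qquad
  \text{e.g.}\quad
  e = \frac{8\,\text{h} - 5\,\text{h}}{5\,\text{h}} = 60\%.
\]
```

On this measure, shortening the working day while wages and necessary labor time stay fixed directly reduces $s$, which is why the formula mapped so cleanly onto the struggle over the length of the working day.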
But today, in the paradigm of immaterial production, the theory of value cannot be conceived in terms of measured quantities of time, and so exploitation cannot be understood in these terms. Just as we must understand the production of value in terms of the common, so too must we try to conceive exploitation as the expropriation of the common.

pages: 419words: 130,627

Last Man Standing: The Ascent of Jamie Dimon and JPMorgan Chase
by
Duff McDonald

“You’ll have to see for yourself, but there are issues here you aren’t aware of.” Little more was said on the issue, but an inexorable process had begun. (James Robinson had dispatched Sandy Weill in part by co-opting Peter Cohen. Weill would be damned if that was going to happen to him again. This time, he wasn’t going to go “up and out.”)
Over the next two days, the teams hashed out much of the conceptual framework for a merger, including a name—Citigroup—and a plan that the company would have three main divisions: the corporate investment bank, the consumer business, and asset management. Dimon quite reasonably assumed that he would be put in charge of running the corporate investment bank, the same job he had shared with Maughan at Salomon Smith Barney.
What he did not consider, given his place on the boards of Travelers and its predecessor companies for several years, was the possibility that he would not be on Citigroup’s board.

Sometimes, even that is useful.”7
By fits and starts, the firm had somehow found its way into an internationality all its own. “Its nerve center sits in New York,” the Sunday Times wrote in 1997. “Its managing partner works in Chicago. Its Global Institute is in Washington. But its truest address is in the world’s capital and industrial centers. It is a United Nations of consulting, with one crucial difference: It works.”8
“McKinsey always had great conceptual frameworks for analyzing problems,” added Fisher. “But what was better was that you could go to a place like Argentina and have market analysis done by people with familiarity with the market—Argentinians—but that had been American-trained. It’s astounding to me that they can run a global firm the way that they do. All management consultants are prima donnas who want to solve things their own way with their own conclusions.

It’s nonetheless undeniable that the two phones have a slew of overlapping functionalities and philosophies. There’s something that seems almost universal about the devices, maybe because their inventors were drawing from a rich shared history of technological concepts and pop-culture predictions.
It’s hard to shake the sense that the Simon was the iPhone in chrysalis, however obscured by black plastic and its now-comical size. The point isn’t that Apple ripped off the Simon. It’s that the conceptual framework for the smartphone, what people imagined they could do with a mobile computer, has been around far, far longer than the iPhone. Far longer than Simon, even.
“It’s this push and pull,” Novak says. “There’s this 2012 Tim Cook interview with Brian Williams. Tim Cook holds up his iPhone and says, ‘This is The Jetsons. I grew up watching The Jetsons, and this is The Jetsons.’ Of course it’s not.

Little medical research was being done in America—although the little that was done was significant—but even that little he had no part of. In Europe science was marching from advance to advance, breakthrough to breakthrough. The most important of these was the germ theory of disease.
Proving and elaborating upon the germ theory would ultimately open the way to confronting all infectious disease. It would also create the conceptual framework and technical tools that Welch and others later used to fight influenza.
Simply put, the germ theory said that minute living organisms invaded the body, multiplied, and caused disease, and that a specific germ caused a specific disease.
There was need for a new theory of disease. As the nineteenth century progressed, as autopsy findings were correlated with symptoms reported during life, as organs from animals and cadavers were put under a microscope, as normal organs were compared to diseased ones, as diseases became more defined, localized, and specific, scientists finally discarded the ideas of systemic illness and the humours of Hippocrates and Galen and began looking for better explanations.

We are shrinking the key feature size of technology, in accordance with the law of accelerating returns, at the exponential rate of approximately a factor of four per linear dimension per decade.68 At this rate the key feature sizes for most electronic and many mechanical technologies will be in the nanotechnology range—generally considered to be under one hundred nanometers—by the 2020s. (Electronics has already dipped below this threshold, although not yet in three-dimensional structures and not yet self-assembling.) Meanwhile rapid progress has been made, particularly in the last several years, in preparing the conceptual framework and design ideas for the coming age of nanotechnology.
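The arithmetic behind that projection is easy to check. In the sketch below, the starting year and starting feature size are assumptions chosen purely to illustrate the calculation; only the factor of four per decade comes from the text.

```python
# Arithmetic check on the claimed shrinkage rate: a factor of four per
# linear dimension per decade. The starting year and feature size are
# assumed values for illustration; only the rate comes from the text.

feature_nm, year = 800.0, 2005     # hypothetical starting point
while feature_nm > 100.0:          # "nanotechnology range": under 100 nm
    feature_nm /= 4.0
    year += 10
print(year, feature_nm)            # -> 2025 50.0: below 100 nm in the 2020s
```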
As important as the biotechnology revolution discussed above will be, once its methods are fully mature, limits will be encountered in biology itself. Although biological systems are remarkable in their cleverness, we have also discovered that they are dramatically suboptimal. I've mentioned the extremely slow speed of communication in the brain, and as I discuss below (see p. 253), robotic replacements for our red blood cells could be thousands of times more efficient than their biological counterparts.69 Biology will never be able to match what we will be capable of engineering once we fully understand biology's principles of operation.

Robin: I don’t like the term “computer scientist” because it puts too much emphasis on the computer, and I think the computer is just an instance of informatic behaviour, so I would say “informatic scientist.” Of course it depends on what you mean by informatics. I tend to think it means acts of calculation and communication, communication being very important.
What is your role as informatic scientist?
Robin: My role, I think, is to try to create a conceptual framework within which analysis can happen. To do this you have to take account of what is actually happening in software, like for example this notion of ubiquitous system, but you try to abstract from that in some ways. This is truly difficult; you will make mistakes, you will invent the wrong concepts, they won’t fly in a sense, they won’t scale up. You are looking for elementary notions that can be scaled up, so that they can actually be used to explain existing or proposed large software systems.

Here’s another pair of examples that we created by computing the bigrams over the text of a children’s story, The Adventures of Buster Brown (included in the Project Gutenberg Selection Corpus):
(4)
a. He roared with me the pail slip down his back
b. The worst part and clumsy looking for whoever heard light
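Sentences like these are easy to reproduce. The minimal sketch below builds a table of observed word pairs and then emits words by repeatedly sampling a recorded successor; the filename and the seed word are our own assumptions for illustration.

```python
# Minimal bigram text generator of the kind that produces examples like
# (4a) and (4b). The filename and seed word below are assumptions.

import random
from collections import defaultdict

def build_bigrams(words):
    """Map each word to the list of words observed to follow it."""
    table = defaultdict(list)
    for w1, w2 in zip(words, words[1:]):
        table[w1].append(w2)
    return table

def generate(table, seed, length=12):
    """Walk the table from a seed word, sampling one successor at a time."""
    out = [seed]
    while len(out) < length and table[out[-1]]:
        out.append(random.choice(table[out[-1]]))
    return " ".join(out)

with open("buster_brown.txt") as f:   # hypothetical local copy of the story
    words = f.read().split()
print(generate(build_bigrams(words), "He"))
# Output is locally plausible but globally incoherent, since each word
# depends only on its immediate predecessor.
```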
You intui