The Unity of Science

The topic of unity in the sciences can be explored through the
following questions: Is there one privileged, most basic or
fundamental concept or kind of thing, and if not, how are the
different concepts or kinds of things in the universe related? Can the
various natural sciences (e.g., physics, astronomy, chemistry, biology)
be unified into a single overarching theory, and can theories within a
single science (e.g., general relativity and quantum theory in
physics, or models of evolution and development in biology) be
unified? Are theories or models the relevant connected units? What
other connected or connecting units are there? Does the unification of
these parts of science involve only matters of fact or are matters of
value involved as well? What about matters of method, material,
institutional, ethical and other aspects of intellectual cooperation?
Moreover, what kinds of unity, not just units, in the sciences are
there? And is the relation of unification one of reduction,
translation, explanation, logical inference, collaboration or
something else? What roles can unification play in scientific
practices, their development, application and evaluation?

1. Historical development in philosophy and science from Greek thought to Logical Empiricism in America

1.1 From Greek thought to Western science

Unity has a history as well as a logic. Different formulations and
debates express intellectual and other resources and interests in
different contexts. Questions about unity belong partly in a tradition
of thought that can be traced back to pre-Socratic Greek cosmology, in
particular to the preoccupation with the question of the One and the
Many. In what senses are the world and, as a result, our knowledge of it
one? A number of representations of the world in terms of a few simple
constituents that were considered fundamental emerged: Parmenides’
static substance, Heraclitus’ flux of becoming, Empedocles’ four
elements, Democritus’ atoms, Pythagoras’ numbers, Plato’s forms,
and Aristotle’s categories. The underlying question of the unity of
our types of knowledge was explicitly addressed by Plato in the
Sophist as follows: “Knowledge also is surely one, but
each part of it that commands a certain field is marked off and given
a special name proper to itself. Hence language recognizes many arts
and many forms of knowledge” (Sophist, 257c). Aristotle
asserted in On the Heavens that knowledge concerns what is
primary, and different “sciences” know different kinds of
causes; it is metaphysics that comes to provide knowledge of the
underlying kind.

With the advent and expansion of Christian monotheism, the
organization of knowledge reflected the idea of a world governed by
the laws dictated by God, its creator and legislator. From this
tradition emerged encyclopedic efforts such as the
Etymologies, compiled in the sixth century by the Andalusian
Isidore, Bishop of Seville, the works of the Catalan Ramon Llull in
the Middle Ages and those of the Frenchman Petrus Ramus in the
Renaissance. Llull introduced iconic tree-diagrams and
forest-encyclopedias representing the organization of different
disciplines including law, medicine, theology and logic. He also
introduced more abstract diagrams—not unlike some found in
Cabbalistic and esoteric traditions—in an attempt to
combinatorially encode the knowledge of God’s creation in a
universal language of basic symbols. Their combination would be
expected to generate knowledge of the secrets of creation and help
articulate knowledge of universal order (mathesis
universalis), which would, in turn, facilitate communication with
different cultures and their conversion to Christianity. Ramus
introduced diagrams representing dichotomies and gave prominence to
the view that the starting point of all philosophy is the
classification of the arts and sciences. The encyclopedic organization
of knowledge served the project of its preservation and
communication.

The emergence of a distinctive tradition of scientific thought
addressed the question of unity through the designation of a
privileged method, which involved a privileged language and set of
concepts. Formally, at least, it was modeled after the Euclidean ideal
of a system of geometry. In the late 16th century, Francis Bacon held
that one unity of the sciences was the result of our organization of
records of discovered material facts in the form of a pyramid with
different levels of generalities. These could be classified in turn
according to disciplines linked to human faculties. Concomitantly, the
controlled interaction with phenomena of study characterized so-called
experimental philosophy. In accordance with at least three
traditions—the Pythagorean tradition, the Bible’s dictum in the
Book of Wisdom and the Italian commercial tradition of
bookkeeping—Galileo proclaimed at the turn of the 17th century
that the Book of Nature had been written by God in the language of
mathematical symbols and geometrical truths; and that in it, the story
of Nature’s laws was told in terms of a reduced set of objective,
quantitative primary qualities: extension, quantity of matter and
motion. A persisting rhetorical role for some form of theological
unity of creation should not be neglected when considering
pre-20th-century attempts to account for the possibility and
desirability of some form of scientific knowledge. Throughout the
17th century, mechanical philosophy and Descartes’ and Newton’s
systematization from basic concepts and first laws of mechanics became
the most promising framework for the unification of natural
philosophy. After the demise of Laplacian molecular physics in the
first half of the 19th century, this role was taken over by ether
mechanics and, unifying forces and matter, energy physics.

1.2 Rationalism and Enlightenment

Descartes and Leibniz gave this tradition a rationalist twist that was
centered on the powers of human reason and the ideal of a system of
knowledge founded on rational principles. It became the project
of a universal framework of exact categories and ideas, a mathesis
universalis (Garber 1992 and Gaukroger 2002). Adapting the
scholastic image of knowledge, Descartes proposed an image of a tree
in which metaphysics is depicted by the roots, physics by the trunk, and the
branches depict mechanics, medicine and morals. Leibniz proposed a
general science in the form of a demonstrative
encyclopedia. This would be based on a “catalogue of simple
thoughts” and an algebraic language of symbols,
characteristica universalis, which would render all knowledge
demonstrative and allow disputes to be resolved by precise
calculation. Both defended the program of founding much of physics on
metaphysics and ideas from life science (Smith 2011) (Leibniz’s
unifying ambitions with symbolic language and physics extended beyond
science, to settle religious and political fractures in Europe). By
contrast, while sharing the model of a geometric, axiomatic structure
of knowledge, Newton’s project of natural philosophy was meant to be
autonomous from any system of philosophy; in the new context it was
still endorsed for its model of organization and for the values of
formal synthesis and ontological simplicity in its empirical
reasoning (see the entry
on
Newton
and Janiak 2008).

Belief in the unity of science or knowledge, along with the
universality of rationality, was at its strongest during the European
Enlightenment. The most important expression of the encyclopedic
tradition came in the mid-eighteenth century from Diderot and
D’Alembert, editors of the Encyclopédie, ou dictionnaire
raisonné des sciences, des arts et des métiers
(1751–1772). Following earlier classifications by Nichols and
Bacon, their diagram presenting the classification of intellectual
disciplines was organized in terms of a classification of human
faculties. Diderot stressed in his own entry,
“Encyclopaedia”, that the word signifies the
unification of the sciences. The function of the encyclopedia was to
exhibit the unity of human knowledge. Diderot and D’Alembert, in
contrast with Leibniz, made classification by subject the primary
focus, and introduced cross-references instead of logical connections.
The Enlightenment tradition in Germany culminated in Kant’s critical
philosophy.

1.3 German tradition since Kant

Kant saw it as one of the functions of philosophy to determine the precise
unifying scope and value of each science. For Kant, the unity of
science is not the reflection of a unity found in nature or, even
less, of one assumed in a real world behind the apparent phenomena.
Rather, it
has its foundations in the unifying a priori character or function of
concepts, principles and of Reason itself. Nature is precisely our
experience of the world under the universal laws that include some
such concepts. And science, as a system of knowledge, is “a
whole of cognition ordered according to principles”, and the
principles on which proper science is grounded are a priori (Preface
to Metaphysical Foundations of Natural Science). A devoted
but not exclusive follower of Newton’s achievements and insights, he
maintained through most of his life that mathematization and a priori
universal laws given by the understanding were preconditions for
genuine scientific character (like Galileo and Descartes earlier, and
Carnap later, Kant believed that mathematical exactness constituted
the main condition for the possibility of objectivity). Here Kant
emphasized the role of mathematics coordinating a priori cognition and
its determined objects of experience. Thus, he contrasted the methods
employed by the chemist, a “systematic art” organized by
empirical regularities, with those employed by the mathematician or
physicist, which were organized by a priori laws, and held that
biology is not reducible to mechanics, as the former involves
explanations in terms of final causes (see Critique of Pure
Reason, Critique of Judgment and Metaphysical
Foundations of Natural Science). With regard to
biology—insufficiently grounded in the fundamental forces of
matter—its inclusion requires the introduction of the idea of
purposiveness (McLaughlin 1991). More generally, for Kant unity was a
regulative principle of reason, namely, an ideal guiding the process
of inquiry toward a complete empirical science with its empirical
concepts and principles grounded in the so-called concepts and
principles of the understanding that constitute and objectify
empirical phenomena (on systematicity as a distinctive aspect of this ideal and on its origin
in reason, see Kitcher 1986 and Hoyningen-Huene 2013).

Kant’s ideas set the frame of reference for discussions of the
unification of the sciences in German thought throughout the
nineteenth century (Wood and Hahn 2011). He gave philosophical
currency to the notion of worldview (Weltanschauung) and,
indirectly, world-picture (Weltbild), establishing
among philosophers and scientists the notion of unity of science as an intellectual
ideal. From Kant, German-speaking Philosophers of Nature adopted the
image of Nature in terms of interacting forces or powers and developed
it in different ways; this image found its way to British natural
philosophy. In Great Britain this idealist, unifying spirit (and other
notions of an idealist and romantic turn) was articulated in William
Whewell’s philosophy of science. Two unifying dimensions are these:
his notion of mind-constructed fundamental ideas, which form the basis
for organizing axioms and phenomena and classifying sciences, and the
argument for the reality of explanatory causes in the form of
consilience of induction, wherein a single cause is
independently arrived at as the hypothesis explaining different kinds
of phenomena.

In the face of expanding research, the unifying emphasis on
organization, classification and foundation led to exploring
differences and rationalizing boundaries. The German intellectual
current culminated in the late nineteenth
century in the debates among philosophers such as Windelband, Rickert
and Dilthey. In their views and those of similar thinkers, a
worldview often included elements of evaluation and life meaning.
Kant had established the basis for the famous distinction between the
natural sciences (Naturwissenschaften) and the cultural, or
social, sciences (Geisteswissenschaften) popularized in
theory of science by Wilhelm Dilthey and Wilhelm Windelband. Dilthey,
Windelband, his student Heinrich Rickert and Max Weber (although the
first two preferred Kulturwissenschaften, which excluded
psychology) debated how differences in subject matter between
the two kinds of sciences forced a difference between
their respective methods. Their preoccupation with the historical
dimension of the human phenomena, along with the Kantian emphasis on
the conceptual basis of knowledge, led to the suggestion that the
natural sciences aimed at generalizations about abstract types and
properties, whereas the human sciences studied concrete individuals and
complexes. The human case suggested a different approach based on
valuation and personal understanding (Weber’s verstehen). For
Rickert, individualized concept formation secured knowledge of
historical individuals by establishing connections to recognized
values (rather than personal valuations). In biology, Ernst Haeckel
defended a monistic worldview (Richards 2008).

This approach stood in opposition to the prevailing empiricist views
that, since the time of Hume, Comte and Mill, had held that the moral
or social sciences (even philosophy) relied on conceptual and
methodological analogies with geometry and the natural sciences, not
just with astronomy and mechanics but also with biology. In the
Baconian tradition, Comte
emphasized a pyramidal hierarchy of disciplines in his
“encyclopedic law” or order, from the most general
sciences about the simplest phenomena to the most specific sciences
about the most complex phenomena, each depending on knowledge from its
more general antecedent: from inorganic physical sciences (arithmetic,
geometry, mechanics, astronomy, physics and chemistry) to the organic
physical ones, such as biology and the new “social
physics”, soon to be renamed sociology (Comte 1830–1842).
Mill, instead, pointed to the diversity of methodologies for
generating, organizing and justifying the knowledge associated with
different sciences, natural and human, and the challenges of imposing
a single standard (Mill 1843, Book VI). He eventually came to view
political economy as an art, a tool for reform more than a system of
knowledge (Snyder 2006).

The Weltbild tradition influenced the physicists Max Planck
and Ernst Mach, who engaged in a heated debate about the precise
character of the unified scientific world-picture. Mach’s more influential view was both
phenomenological and Darwinian: the unification of knowledge took the
form of an analysis of ideas into biologically embodied elementary
sensations (neutral monism) and was ultimately a matter of adaptive
economy of thought. Planck adopted a realist view that took science to
gradually approach complete truth about the world, and fundamentally
adopted the thermodynamical principles of energy and entropy (on the
Mach-Planck debate see Toulmin 1970). These world-pictures constituted
some of the alternatives to a long-standing mechanistic view that,
since the rise of mechanistic philosophy with Descartes and Newton,
had informed biology as well as most branches of physics. In the
background was the perceived conflict between the so-called mechanical
and electromagnetic worldviews, a conflict addressed throughout
the first two decades of the twentieth century in the work of Albert
Einstein (Holton 1998).

In the same German tradition and amidst the proliferation of work on energy physics and books on
unity of science, the German energeticist Wilhelm Ostwald declared the
20th century the “Monistic century”. During the 1904
World’s Fair in St. Louis, the German psychologist and Harvard
professor Hugo Münsterberg organized a Congress under the title
“Unity of Knowledge”; invited speakers were Ostwald,
Ludwig Boltzmann, Ernest Rutherford, Edward Leamington Nichols, Paul
Langevin and Henri Poincaré. In 1911 the International
Committee of Monism held its first meeting in Hamburg, with Ostwald
presiding.[1]
Two years later it published Ostwald’s monograph, Monism as the
Goal of Civilization. In 1912, Mach, Felix Klein, David Hilbert,
Einstein, and others signed a manifesto aiming at the development of a
comprehensive world-view. Unification remained a driving scientific
ideal. In the same spirit, Mathieu Leclerc du Sablon published his
L’Unité de la Science (1919), exploring metaphysical
foundations, and Johan Hjorst published The Unity of Science
(1921), sketching out a history of philosophical systems and unifying
scientific hypotheses.

1.4 Unity and reductionism in logical empiricism

The question of unity engaged science and philosophy alike. In the
20th century the unity of science became a distinctive theme of the
scientific philosophy of logical empiricism. Logical
empiricists—known controversially also as logical
positivists—and most notably the founding members of the Vienna
Circle in their Manifesto adopted the Machian banner of “unity
of science without metaphysics”, a normative criterion of unity
with a role in social reform based on the demarcation between science
and metaphysics: the unity of method and language that included all
the sciences, natural and social. A common method did not necessarily
imply a more substantive unity of content involving theories and their
concepts.

A stronger reductive model within the Vienna Circle was recommended by
Rudolf Carnap in his The Logical Construction of the World
(1928). While embracing the Kantian connotation of the term “constitutive
system”, it was inspired by recent formal standards: Hilbert’s
axiomatic approach to formulating theories in the exact sciences and
Frege’s and Russell’s logical constructions in mathematics. It was
also predicated on the formal values of simplicity, rationality,
(philosophical) neutrality and objectivity associated with scientific
knowledge. In particular, Carnap tried to explicate such notions
through a rational reconstruction of science as a method and a
structure based on logical constructions out of (1) basic concepts in
axiomatic structures and (2) rigorous, reductive logical connections
between concepts at different levels.

Different constitutive systems or logical
constructions would serve different (normative) purposes: a theory of
science and a theory of knowledge. Both foundations raised the issue
of the nature and universality of a physicalist language.

One such system of unified science is the theory of science, in which the
construction connects concepts and laws of the different sciences at
different levels, with physics—with its genuine laws—as
fundamental, lying at the base of the hierarchy. Because of the
emphasis on the formal and structural properties of our
representations, objectivity, rationality and unity go hand in hand.
Carnap’s formal emphasis developed further in Logical Syntax of
Language (1934). Alternatively, all scientific concepts could be
constituted or constructed in a different system in the protocol
language out of classes of elementary complexes of experiences,
scientifically understood, representing experiential concepts.
Carnap subsequently defended the epistemological and methodological
universality of physicalist language and physicalist statements. Unity of science
in this context was an epistemological project (for a survey of the
epistemological debates, see Uebel 2007; on different strands of the
anti-metaphysical normative project of unity see Frost-Arnold
2005).

Whereas Carnap aimed at rational reconstructions, another member of
the Vienna Circle, Otto Neurath, favored a more naturalistic and
pragmatic approach, with a less idealized and reductive model of
unity. His evolving standards of unity were generally motivated by the complexity of
empirical reality and the application of empirical knowledge to practical goals. He spoke of an
“encyclopedia-model”, opposed to the classic ideal of a
pyramidal, reductive “system-model”. The
encyclopedia-model took into account the presence within science of
uneliminable and imprecise terms from ordinary language and the social
sciences and emphasized a unity of language and the local exchanges of
scientific tools. Specifically, Neurath stressed the
material-thing-language called “physicalism”, not to be
confused with an emphasis on the vocabulary of physics. Its
motivation was partly epistemological, and Neurath endorsed
anti-foundationalism: unified science, like a boat at sea, would not
rest on firm foundations. The scientific spirit abhorred dogmatism.
This weaker model of unity emphasized empiricism and the normative
unity of the natural and the human sciences.

Like Carnap’s unified reconstructions, Neurath’s had pragmatic
motivations. Unity without reductionism provided a tool for
cooperation and it was motivated by the need for successful
treatment—prediction and control—of complex phenomena in
the real world that involved properties studied by different theories
or sciences (from real forest fires to social policy): unity of
science at the point of action (Cat, Cartwright and Chang 1996). It is
an argument from holism, the counterpart of Duhem’s claim that only
clusters of hypotheses are confronted with experience. Neurath spoke
of a “boat”, a “mosaic”, an
“orchestration”, and a “universal jargon”.
Following institutions such as the International Committee on Monism
and the International Council of Scientific Unions, Neurath
spearheaded a movement for Unity of Science in 1934 that encouraged
international cooperation among scientists and launched the project of
an International Encyclopedia of Unified Science. It expressed the
internationalism of his socialist convictions and the international
crisis that would lead to the Second World War (Kamminga and Somsen 2016).

At the end of the Eighth International Congress of Philosophy, held in
Prague in September of 1934, Neurath proposed a series of
International Congresses for the Unity of Science. These took place in
Paris, 1935; Copenhagen, 1936; Paris, 1937; Cambridge, England, 1938;
Cambridge, Massachusetts, 1939 and Chicago, 1941. For the organization
of the congresses and related activities, Neurath founded the Unity of
Science Institute in 1936, which was renamed in 1937 as the
International Institute for the Unity of Science, alongside the
International Foundation for Visual Education, founded in 1933. The
Institute’s executive committee was composed of Neurath, Philip Frank
and Charles Morris.

After the Second World War, a discussion of unity engaged philosophers
and scientists in the Inter-Scientific Discussion Group in Cambridge,
Massachusetts (initially the Science of Science Discussion Group,
founded in October 1940 primarily by Philip Frank and Carnap,
themselves founders of the Vienna Circle, together with Quine, Feigl,
Bridgman, and the psychologists E. Boring and S.S. Stevens), which
would later become the Unity of Science Institute. The group was joined by scientists from
different disciplines, from quantum mechanics (Kemble and Van Vleck)
and cybernetics (Wiener) to economics (Morgenstern), as part of what
was both a self-conscious extension of the Vienna Circle and a
reflection of local concerns within a technological culture increasingly dominated by the interest in computers and nuclear
power. The characteristic feature of the new view of unity was the
ideas of consensus and subsequently, especially within the USI,
cross-fertilization. These ideas were instantiated in the emphasis on
scientific operations (operationalism) and the creation of war-boosted
cross-disciplines such as cybernetics, computation, electro-acoustics,
psycho-acoustics, neutronics, game theory, and biophysics (Galison
1998 and Hardcastle 2003).

In the late 1960s, Michael Polanyi and Marjorie Grene organized a
series of conferences funded by the Ford Foundation on unity of
science themes (Grene 1969a, 1969b, 1971). Their general character was
interdisciplinary and anti-reductionist. The group was originally
called “Study Group on Foundations of Cultural Unity,”
but this was later changed to “Study Group on the Unity of
Knowledge.” By then, a number of American and international
institutions were already promoting interdisciplinary projects in
academic areas (Klein 1990). For both Neurath and Polanyi the
organization of knowledge and science, the Republic of Science, was
inseparable from ideals of political organization.

2. Varieties of Unity

The historical introductory sections have aimed to show the
intellectual centrality, varying formulations, and significance of the
concept of unity. The rest of the entry presents a variety of modern themes and
views. It will be helpful to introduce a
number of broad categories and distinctions that can sort out different kinds of accounts
and track some relations between them as well as additional significant
philosophical issues. (The categories are not mutually exclusive, and
they sometimes partly overlap; therefore, while they help label and
characterize different positions, they cannot provide a simple, easy
and neatly ordered conceptual map.)

Connective unity is a weaker notion than the specific ideal of
reductive unity; the latter requires asymmetric relations of
reduction, with assumptions about hierarchies of levels of description
and the primacy—conceptual, ontological, epistemological, and so
on—of a fundamental representation. The category of connective
unity helps accommodate and bring attention to the diversity of
non-reductive accounts.

Another useful distinction is between synchronic and
diachronic unity. Synchronic accounts are ahistorical,
assuming no meaningful temporal relations. Diachronic accounts, by
contrast, introduce genealogical hypotheses involving asymmetric
temporal and causal relations between entities or states of the
systems described. Evolutionary models are of this kind; they may be
reductive to the extent that the posited original entities are simpler
and on a lower level of organization and size. Others simply emphasize
connection without overall directionality.

In general, it is useful to distinguish between ontological
unity and epistemological unity, even if many accounts
bear both characteristics and fall under both rubrics. In some cases,
one kind supports the other salient kind in the model. Ontological
unity is here broadly understood as involving relations between
descriptive conceptual elements; in some cases the concepts will describe
entities, facts, properties or relations, and descriptive models will focus on
metaphysical aspects of the unifying connections such as holism,
emergence, or downwards causation. Epistemological unity applies to
epistemic relations or goals such as explanation. Methodological connections and
formal (logical, mathematical, etc.) models may belong in this kind. I
will not draw any strict or explicit distinction between
epistemological and methodological dimensions or modes of unity.

Additional categories and distinctions include the following:
vertical unity or inter-level unity is unity of
elements attached to levels of analysis, composition or organization on a hierarchy, whether for a
single science or more, whereas horizontal unity or
intra-level unity applies to one single level and to its
corresponding kind of system (Wimsatt 2007). Global unity is
unity of any other variety with a universal quantifier of all kinds of
elements, aspects or descriptions associated with individual sciences
as a kind of monism, for instance, taxonomical monism about
natural kinds, while local unity applies to a subset
(Cartwright has distinguished this same-level global form of
reduction, or “imperialism”, in Cartwright 1999; see also Mitchell
2003). Obviously, vertical and horizontal accounts of unity can be
either global or local. Finally, the rejection of global unity has
been associated with isolationism, keeping independent
competing alternative representations of the same phenomena or
systems, as well as local integration, the local connective
unity of the alternative perspectives. A distinction of methodological
nature contrasts internal and external perspectives,
according to whether the accounts are based naturalistically, on the
local contingent practices of certain scientific communities at a
given time, or based on universal metaphysical assumptions broadly
motivated (Ruphy 2017). (Ruphy has criticized Cartwright and Dupré
for having adopted external metaphysical positions and defended the
internal perspective, also present in the program of the so-called
Minnesota School, i.e., Kellert et al. 2006.)

3. Epistemological Unities

3.1 Reduction

Philosophy of science became professionally consolidated in the 1950s
around a positivist orthodoxy that may be characterized by the
following set of commitments: a syntactic formal approach to theories,
logical deductions and axiomatic systems, a distinction between
theoretical and observational vocabularies, and empirical
generalizations. Unity and especially reduction have been
understood in those terms; specific elements of the dominating
accounts would stand and fall with the attitudes towards the elements
of the orthodoxy mentioned above. First, a reminder:
Reductionism must be distinguished from reduction.
Reductionism is the adoption of reduction as the global ideal of a
unified structure of scientific knowledge and a measure of its
progress towards that ideal. As before, I will consider methodological aspects of unity
as an extension of epistemological matters, insofar as methodology
serves epistemology.

Two formulations of unification in the logical positivist tradition
of the ideal logical structure of science placed the question of unity
at the core of philosophy of science: Carl Hempel’s
deductive-nomological model of explanation and Ernst Nagel’s model of
reduction. Both are fundamentally epistemological models, and both
are specifically explanatory, at least in the sense that explanation
serves unification. The emphasis on language and logical structure makes
explanatory reduction a form of unity of the synchronic kind. Still, Nagel’s
model of reduction is a model of scientific structure and explanation
as well as of scientific progress. It is based on the problem of
relating different theories as different sets of theoretical
predicates.

Reduction requires two conditions: connectability and
derivability. Connectability of laws of different theories
requires meaning invariance in the form of extensional
equivalence between descriptions, with bridge principles between
coextensive but distinct terms in different theories.
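
Schematically, and using the standard textbook illustration of
thermodynamics reduced to statistical mechanics (an example added here
for concreteness, not one discussed in the text above), the two
conditions can be displayed as:

```latex
% Connectability: a bridge principle linking a term of the reduced
% theory (temperature) to terms of the reducing theory (mean
% molecular kinetic energy), here in its familiar kinetic-theory form:
\[
\forall x \,\Bigl( T(x) \;=\; \frac{2}{3k}\,\bigl\langle E_{\mathrm{kin}}(x) \bigr\rangle \Bigr)
\]
% Derivability: the laws L of the reduced theory follow deductively
% from the laws of the reducing theory together with the set B of
% bridge principles:
\[
L(T_{\mathrm{reducing}}) \cup B \;\vdash\; L(T_{\mathrm{reduced}})
\]
```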

Nagel’s account distinguishes two kinds of reductions: homogenous and
heterogeneous. When the terms of the reduced theory are contained
among those of the reducing theory, the reduction is homogeneous.
When the related terms are different, the reduction is
heterogeneous. Derivability requires a deductive relation between the
laws involved. In the quantitative sciences, the derivation often
involved taking a limit. In this sense the reduced science is
considered an approximation to the reducing new one.
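
A standard physics illustration of such a limit (added here for
concreteness; it is not one of the cases discussed above) is the
recovery of the Newtonian expression for momentum from the
relativistic one as velocities become small compared with the speed of
light:

```latex
\[
p \;=\; \frac{m v}{\sqrt{1 - v^{2}/c^{2}}}
\;\longrightarrow\; m v
\qquad \text{as } v/c \to 0 ,
\]
```

so that classical mechanics is treated as an approximation to special
relativity for \(v \ll c\).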

Neo-Nagelian accounts have attempted to solve Nagel’s problem of
reduction between putatively incompatible theories. Here are a
few:

Nagel’s two-term relation account has been modified by weaker
conditions of analogy and a role for conventions, requiring the
reduction relation to be satisfied not necessarily by the two original
theories, \(T_1\) and \(T_2\), which are respectively new and old and
more and less general, but by the modified theories \(T'_1\) and
\(T'_2\). Explanatory reduction is then strictly a four-term relation
in which \(T'_1\) is “strongly analogous” to \(T_1\) and, with the
insight that the more fundamental theory can offer, corrects the older
theory, \(T_2\), changing it to \(T'_2\). Nagel’s account also requires that bridge laws be
synthetic identities, in the sense that they be factual, empirically
discoverable and testable; in weaker accounts, admissible bridge laws may include elements of convention (Schaffner 1967;
Sarkar 1998). The difficulty lay especially with the task of
specifying or giving a non-contextual, transitive account of the
relations between \(T\) and \(T'\) (Wimsatt 1976).

An alternative set of semantic and syntactic conditions of reduction
bears counterfactual interpretations. For instance, syntactic
conditions in the form of limit relations and ceteris paribus
assumptions help explain why the reduced theory works where it does
and fails where it does not (Glymour 1969).

A different approach to reductionism acknowledges a commitment to
providing explanation but rejects the value of a focus on the role
of laws. This approach typically draws a distinction between hard
sciences such as physics and chemistry and special sciences such as
biology and the social sciences. It claims that laws that are in a sense
operative in the hard sciences are not available in the special ones, or play a more
limited and weaker role, and this on account of historical character, complexity or reduced scope. The rejection of empirical laws in biology, for instance,
has been argued on grounds of historical dependence on contingent initial
conditions (Beatty 1995), and as a matter of supervenience (see the
entry on
supervenience)
of spatio-temporally restricted functional claims on lower level
molecular ones, and the multiple realization (see the entry on
multiple realizability)
of the former by the latter (Rosenberg 1994; Rosenberg’s argument
from supervenience to reduction without laws must be contrasted with
Fodor’s physicalism about the special sciences about laws without
reduction (see below and the entry on
physicalism);
for a criticism of these views see Sober 1996). This non-Nagelian
approach further rejects the assumption that explanation rests on identities
between predicates and deductive derivations (reduction and
explanation might be said to be justified by derivations, but not
constituted by them; see Spector 1978). Explanation is provided by lower-level
mechanisms; their explanatory role is to replace final why-necessarily
questions (functional) with proximate how-possibly questions
(molecular).

One suggestion to make sense of the possibility of the supervening
functional explanations without Nagelian reduction is a metaphysical
picture of composition of powers in explanatory mechanisms (Gillett
2010). The reductive commitment to the lower level is based on relations of composition,
at play in epistemological analysis and metaphysical synthesis, but is not merely formal and
derivational. We infer what composes the higher level but we cannot
simply get all the relevant knowledge of the higher level from our
knowledge of the lower level (see also Auyang 1998).

A more general characterization views reductionism as a research
strategy. On this methodological view reductionism can be
characterized by a set of so-called heuristics (non-algorithmic,
efficient, error-based, purpose-oriented, problem-solving tasks)
(Wimsatt 2006): heuristics of conceptualization (e.g., descriptive
localization of properties, system-environment interface determinism,
level and entity-dependence), heuristics of model-building and theory
construction (e.g., model intra-systemic localization with emphasis on
structural properties over functional ones, contextual simplification and
external generalization) and heuristics of observation and
experimental design (e.g., focused observation, environmental control, local
scope of testing, abstract shared properties, behavioral regularity and
context-independence of results).

3.2 Antireductionism

The focus had been since the 1930s on a syntactic approach, with
physics as the paradigm of science, deductive logical relations as the
form of cognitive or epistemic goals such as explanation and
prediction, and theory and empirical laws as paradigmatic units of
scientific knowledge (Suppe 1977; Grünbaum and Salmon 1988). The
historicist turn in the 1960s, the semantic turn in philosophy of
science in the 1970s and a renewed interest in special sciences has
changed this focus. The very structure of hierarchy of levels has
lost its credibility, even for those who believe in it as a model of
autonomy of levels rather than as an image of fundamentalism. The
rejection of such models and their emendations have occupied the last
four decades of philosophical discussion about unity in and of the
sciences (especially in connection to psychology and biology, and more
recently chemistry). A valuable consequence has been the strengthening
of philosophical projects and communities devoting more sustained and
sophisticated attention to special sciences, different from
physics.

The first target of antireductionist attacks has been Nagel’s demand
of extensional equivalence. It has been dismissed as an inadequate demand of “meaning
invariance” and approximation, and with it the possibility of
deductive connections. Mocking the positivist legacy of progress
through unity, empiricism and anti-dogmatism, these constraints have
been decried as intellectually dogmatic, conceptually weak and
methodologically overly restrictive (Feyerabend 1962). The emphasis is
placed, instead, on the merits of the new theses of incommensurability
and methodological pluralism.

A similar criticism of reduction involves a different move: that the
deductive connection be guaranteed provided that the old, reduced
theory was “corrected” beforehand (Schaffner 1967). The
evolution and the structure of scientific knowledge could be neatly
captured, using Schaffner’s expression, by “layer-cake
reduction.” The terms “length” and
“mass”—or the symbols \(l\) and
\(m\)—, for instance, may be the same in Newtonian and
Relativistic mechanics, or the term “electron” the same in
classical physics and quantum mechanics, or the term
“atom” the same in quantum mechanics and in chemistry, or
“gene” in Mendelian genetics and molecular genetics (see,
for instance, Kitcher 1984). But the corresponding concepts, they
argued, are not. Concepts or words are to be understood as getting
their content or meaning within a holistic or organic structure, even
if the organized wholes are the theories that include them. From this
point of view, different wholes, whether theories or Kuhnian
paradigms, manifest degrees of conceptual incommensurability.
As a result, the derived, reducing theories typically are not the
allegedly reduced, older ones; and their derivation sheds no relevant
insight into the relation between the original, older one and the new
(Feyerabend 1962; Sklar 1967).

From a historical standpoint, the positivist model collapsed the
distinction between synchronic and diachronic reduction, that is, between
reductive models of the structure and the evolution, or succession, of
scientific theories. By contrast, historicism, as embraced by Kuhn and Feyerabend, drove a
wedge between the two dimensions and rejected the linear model of
scientific change in terms of accumulation and replacement. For Kuhn,
replacement becomes partly continuous, partly non-cumulative
change in which one world—or, less literally, one world-picture,
one paradigm—replaces another (after a revolutionary episode of
crisis and proliferation of alternative contenders) (Kuhn 1962). This
image constitutes a form of pluralism, and, like the
reductionism it is meant to replace, it can be either
synchronic or diachronic. Here is where Kuhn and
Feyerabend parted ways. For Kuhn synchronic pluralism only describes
the situation of crisis and revolution between paradigms. For
Feyerabend history is less monistic, and pluralism is and should
remain a synchronic and diachronic feature of science and culture
(Feyerabend, here, thought science and society inseparable, and
followed Mill’s philosophy of liberal individualism and
democracy).

A different kind of antireductionism addresses a more conceptual
dimension, the problem of categorial reduction:
Meta-theoretical categories of description and interpretation for
mathematical formalisms, e.g., criteria of causality, may block full
reduction. Basic interpretative concepts that are not just variables
in a theory or model are not reducible to counterparts in fundamental
descriptions (Cat 2000 and 2006; the case of individuality in quantum
physics has been discussed in Healey 1991; Redhead and Teller 1991 and
Auyang 1995; in psychology in Block 2003).

Unity has been considered an epistemic virtue, with different modes of
unification associated with roles such as demarcation, explanation and
evidence.

Demarcation. Certain models of unity, which we may call
container models, attempt to demarcate science from non-science.
The criteria adopted are typically methodological and normative, not
descriptive. Unlike connective models, they serve a dual function of
drawing up and policing a boundary that (1) encloses and endorses the
sciences and (2) excludes other practices. As noted above, some
demarcation projects have aimed to distinguish between natural and special
sciences. The more notorious ones, however, have aimed to
exclude practices and doctrines dismissed under the labels of
metaphysics, pseudo-science or popular knowledge. Empirical or not,
the applications of standards of epistemic purity are not merely
identification or labeling exercises for the sake of carving out
scientific inquiry as a natural kind or mapping out intellectual landscapes. The
purpose is to establish authority and the stakes involve educational,
legal and financial interests. Recent controversies include not just
the teaching of creation science, but also polemics over the scientific
status of, for instance, homeopathy, vaccination and models of plant neurology and
climate change.

The most influential demarcation criterion has been Popper’s original
anti-metaphysics barrier: the condition of empirical falsifiability of
scientific statements. It required a logically possible relation to basic statements, linked to experience, that can
prove general hypotheses to be false with certainty. For this purpose he
defended the application of a particular deductive argument, the
modus tollens (Popper 1935/1951). Another demarcation
criterion is explanatory unity, empirically grounded. Hempel’s
deductive-nomological model characterizes the scientific explanation
of events as a logical argument that expresses their expectability in
terms of their subsumption under an empirically testable
generalization. Explanations in the historical sciences too must fit
the model if they are to count as scientific. They could then be brought
into the fold as bona fide scientific explanations even if they could
qualify only as explanation sketches.

Since their introduction, Hempel’s model and its weaker versions have
been challenged as neither generally applicable nor appropriate. The
demarcation criterion of unity is undermined by criteria of
demarcation between natural and historical sciences. For instance,
historical explanations have a genealogical or narrative form, or
else they require the historian’s engaging problems or issuing
a conceptual judgment that brings together meaningfully a set of
historical facts (recent versions of such decades-old arguments are in
Cleland 2002, Koster 2009, Wise 2011). According to more radical
views, natural sciences such as geology and biology are historical in
their contextual, causal and narrative forms; also that Hempel’s
model, especially the requirement of empirically testable strict
universal laws, is satisfied by neither the physical sciences nor the
historical sciences, including archeology and biology (Ereshefsky
1992).

A number of legal decisions have appealed to Popper’s and Hempel’s
criteria, adding the epistemic role of peer review, publication and
consensus around the sound application of methodological standards. A
more recent criterion has sought a different kind of demarcation: it
is comparative rather than absolute; it aims to compare science and
popular science; it adopts a broader notion of science in the German tradition
of Wissenschaften, that is, roughly of scholarly fields of
research that include formal sciences, natural sciences, human
sciences and the humanities; and it emphasizes the role of systematicity,
with an emphasis on different forms of epistemic connectedness as weak
forms of coherence and order (Hoyningen-Huene 2013).

Explanation. Unity has been defended in the wake of authors
such as Kant and Whewell as an epistemic criterion of explanation or
at least fulfilling an explanatory role. In other words, rather than
modeling unification in terms of explanation, explanation is modeled
in terms of unification. A number of proposals introduce an
explanatory measure in terms of the number of independent explanatory
laws or phenomena conjoined in a theoretical structure. On this representation, unity contributes
understanding and confirmation from the fewest basic kinds of
phenomena, regardless of explanatory power in
terms of derivation or argument patterns (Friedman
1974; Kitcher 1981; Kitcher 1989; Wayne 1996; within a probabilistic framework,
Myrvold 2003, Sober 2003 and Roche and Sober 2017; see below).

A weaker position argues that unification is not explanation on the
grounds that unification is simply systematization of old beliefs and
operates as a criterion of theory-choice (Halonen and Hintikka
1999).

The unification account of explanation has been defended within a more
detailed cognitive and pragmatist approach. The key is to think of
explanations as question-answer episodes involving four elements: the
explanation-seeking question about \(P\), “\(P\)?”, the cognitive
state \(C\) of the questioner/agent for whom \(P\) calls for
explanation, the answer \(A\), and the cognitive state
\(C+A\) in which the need for explanation of \(P\) has
disappeared. A related account models unity in the cognitive state in
terms of the comparative increase of coherence and elimination of
spurious unity—such as circularity or redundancy (Schurz 1999).
Unification is also based on information-theoretic transfer or
inference relations. Unification of hypotheses is only a virtue if it
unifies data. The last two conditions imply that unification yields
also empirical confirmation. Explanations are global increases in
unification in the cognitive state of the cognitive agent (Schurz
1999; Schurz and Lambert 1994).

The unification-explanation link can be defended on the grounds that
laws make unifying similarity expectable (hence Hempel-explanatory)
and this similarity becomes the content of a new belief (Weber and Van
Dyck 2002 contra Halonen and Hintikka 1999). Unification is not the
mere systematization of old beliefs. Contra Schurz they argue that
scientific explanation is provided by novel understanding of facts and
the satisfaction of our curiosity (Weber and Van Dyck 2002 contra
Schurz 1999). In this sense, causal explanations, for instance, are
genuinely explanatory and do not require an increase of
unification.

A contextualist and pluralist account argues that understanding is a
legitimate aim of science that is pragmatic and not necessarily
formal, or a subjective psychological by-product of
explanation (De Regt and Dieks 2005). In this view explanatory
understanding is variable and can have diverse forms, such as
causal-mechanical and unification, without conflict (De Regt and Dieks
2005). In the same spirit, Salmon linked unification to the
epistemic virtue or goal of explanation and distinguished between
unification and causal-mechanical explanation as forms of scientific
explanatory understanding (Salmon 1998).

The views on scientific explanation have evolved away from the formal
and cognitive accounts of the epistemic categories. Accordingly, the
source of understanding provided by scientific explanations has been
misidentified according to some (Barnes 1992). The genuine source, for
important but not all cases, lies in causal explanation, or causal
mechanism (Cartwright 1983; Cartwright 1989; see also Glennan 1996,
Cat 2005 and Craver 2007). Mechanistic models of explanation have
become entrenched in philosophical accounts of the life sciences
(Darden 2006, Craver 2007). As an epistemic virtue, the role of
unification has been traced to the causal form of the explanation, for
instance, in statistical regularities (Schurz 2015). The challenge
extends to the alleged extensional link between explanation on the one
hand, and truth and universality on the other (Cartwright
1983, Dupré 1993, Woodward 2003). In this sense, explanatory unity,
which rests on metaphysical assumptions about components and their
properties, also involves a form of ontological or metaphysical unity
(for a methodological criticism of external, metaphysical
perspectives, see Ruphy 2016).

Similar criticisms extend to the traditionally formalist arguments in
physics about fundamental levels; there unification fails to yield
explanation in the formal scheme based on laws and their symmetries
(Cat 1998; Cat 2005). Unification and explanation conflict on the
grounds that in biology and physics only causal mechanical
explanations answering why-questions yield understanding of the
connections that contribute to “true unification”
(Morrison
2000;[2]
Morrison’s choice of standard for evaluating the epistemic accounts
of unity and explanation and her focus on systematic theoretical
connections without reduction has not been without critics, e.g.,
Wayne 2002; Plutynski 2005, Karaca
2012).[3]

Methodology. Unity has long been understood as a methodological
principle, primarily, but not exclusively, in reductionist versions
(Wimsatt 1976 and Wimsatt 2006 for the case of biology and Cat 1998
for physics). This is different from the case of unity through
methodological prescriptions. One methodological criterion appeals to
the epistemic virtues of simplicity or parsimony, whether
epistemological or ontological (Sober 2003). As a formal probabilistic
principle of curve-fitting or average predictive accuracy, the
relevance of unity is objective. Unity plays the role of an empirical
background theory.

Evidence. The probabilistic model dovetails with other recent
formal discussions of unity and coherence within the framework of
Bayesianism (Forster and Sober 1994, sect. 7; Schurz and Lambert 2005
is also a formal model, with an algebraic approach). More generally,
the probabilistic framework articulates formal characterizations of
unity and introduces its role in evaluations of evidence. As in the
dual relation to explanation, also in this case, unification is not a
condition for relevant evidence but a criterion of evidence (for a non-probabilistic account of the relation between unification and confirmation, see Schurz 1999). The evidentiary role of unification of hypotheses or models is related, but not reducible, to the evidentiary role of synthesis of data in statistics.

A criterion of unity defended for its epistemic virtue in relation to
evidence is simplicity, or parsimony (Sober 2013 and 2016).
Comparatively speaking, simpler hypotheses, models or theories present
a higher likelihood of truth, empirical support and accurate
prediction. From a methodological standpoint, however, appeals to
parsimony might not be sufficient. Moreover, the
connection between unity as parsimony and likelihood is not
interest-relative, at least in the way that the connection between
unity and explanation is (Sober 2003; Forster and Sober 1994 and Sober
2013 and 2016).

On the Bayesian approach, the rational comparison and acceptance of
probabilistic beliefs in the light of empirical data is constrained by
Bayes’ Theorem for conditional probabilities (where \(h\) and
\(d\) are the hypothesis and the data respectively):

\[
P(h \mid d) = \frac{P(d \mid h) \cdot P(h)}{P(d)}
\]
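As a minimal numerical illustration of the theorem above (all probability values here are hypothetical, chosen only for the example), the posterior \(P(h \mid d)\) can be computed by expanding \(P(d)\) via the law of total probability over \(h\) and its negation:

```python
# A minimal sketch of Bayesian updating via Bayes' Theorem.
# All numerical values are hypothetical and purely illustrative.

def posterior(prior_h: float, likelihood_d_given_h: float,
              likelihood_d_given_not_h: float) -> float:
    """Return P(h | d), computing P(d) by total probability over h and not-h."""
    p_d = (likelihood_d_given_h * prior_h
           + likelihood_d_given_not_h * (1.0 - prior_h))
    return likelihood_d_given_h * prior_h / p_d

# A hypothesis with prior 0.3 that makes the data four times
# as likely as its negation does:
p = posterior(prior_h=0.3, likelihood_d_given_h=0.8,
              likelihood_d_given_not_h=0.2)
print(round(p, 3))  # conditionalizing on d raises the probability of h
```

The sketch only makes vivid the structure of the theorem: data that a hypothesis renders likelier than its rivals do shift probability toward that hypothesis upon conditionalization.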

One explicit Bayesian account of unification as an epistemic,
methodological virtue, has introduced the following measure of unity:
a hypothesis \(h\) unifies phenomena \(p\) and \(q\) to
the degree that given \(h, p\) is
statistically/probabilistically relevant to (or correlated with)
\(q\) (Myrvold 2003; a probabilistically equivalent measure of
unity in Bayesian terms in McGrew 2003; on the equivalence, Schupbach
2005). This measure of unity has been criticized as neither necessary
nor sufficient (Lange 2004; Lange’s criticism assumes the
unification-explanation link; in a rebuttal, Schupbach has rejected
this and other assumptions behind Lange’s criticism; Schupbach 2005).
In a recent development, Myrvold argues for mutual information
unification, i.e., that hypotheses are said to be supported by their
ability to increase the amount of what he calls the mutual information
of the set of evidence statements; see Myrvold 2017. The explanatory
unification contributed by hypotheses about common causes is an
instance of the information condition.
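The probabilistic-relevance idea can be sketched numerically. The log-ratio form below is a standard information-theoretic way of measuring correlation conditional on a hypothesis; it is an illustration of the idea in the text, not a quotation of Myrvold’s own formula, and all probability values are hypothetical:

```python
from math import log2

# Sketch of the relevance idea: h unifies p and q to the degree that,
# conditional on h, p and q are correlated. The log-ratio below is
# positive iff p and q are positively correlated given h, and zero
# iff they are conditionally independent. Illustrative only.

def relevance_given_h(p_pq_given_h: float, p_p_given_h: float,
                      p_q_given_h: float) -> float:
    """Return log2 of P(p & q | h) / (P(p | h) * P(q | h))."""
    return log2(p_pq_given_h / (p_p_given_h * p_q_given_h))

# Under h, p and q co-occur more often than independence would predict:
print(relevance_given_h(0.4, 0.5, 0.5) > 0)    # positive relevance
print(relevance_given_h(0.25, 0.5, 0.5) == 0)  # conditional independence
```

On such a measure, a common-cause hypothesis scores well precisely because it renders otherwise disparate phenomena informative about one another, matching the mutual-information gloss in the text.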

Finally, another kind of formal model for a different kind of unity
straddles the boundary between formal epistemology and ontology:
computational models of emergence or complexity. They are based on
simulations of chaotic dynamical processes such as cellular automata
(Wolfram 1984; Wolfram 2002). Their supposed superiority to
combinatorial models based on aggregative functions of parts of wholes
does not lack defenders (Crutchfield 1994; Crutchfield and Hanson
1997; Humphreys 2004, 2007 and 2008; Humphreys and Huneman 2008;
Huneman 2008a and b and 2010).

Unification without reduction. Reduction is not the sole
standard of unity and models of unification without reduction have
proliferated. In addition, such models introduce in turn new
units of analysis. An early influential account centers around the
notion of interfield theories (Darden and Maull 1977; Darden
2006). The orthodox central place of theories as the unit of scientific
knowledge is replaced by that of fields. Examples of such fields are
genetics, biochemistry and cytology. Different levels of organization
correspond in this view to different fields: Fields are individuated
intellectually by a focal problem, a domain of facts related to the
problem, explanatory goals, methods and a vocabulary. Fields import
and transform terms and concepts from others. The model is based on
the idea that theories and disciplines do not match neat levels of
organization within a hierarchy; rather, many of them in their scope
and development cut across different such levels. Reduction is a
relation between theories within a field, not across fields.

Interdependence and hybridity. In general, the higher-level
theories (for instance, cell physiology) and the lower-level theories
(for instance, biochemistry) are ontologically and epistemologically
inter-dependent on matters of informational content and evidential
relevance; one cannot be developed without the other (Kincaid 1996;
Kincaid 1997; Wimsatt 1976; Spector 1977). The interaction between
fields (through researchers’ judgments and borrowings) may provide
enabling conditions for subsequent interactions. For instance,
Maxwell’s adoption of statistical techniques in color research enabled
the introduction of similar ideas from social statistics in his research
in reductive molecular theories of gases; the reduction, in turn,
enabled experimental evidence from chemistry and acoustics; similarly
different chemical and spectroscopic bases for colors provided
chemical evidence in color research (Cat 2013 and 2014).

The emergence and development of hybrid disciplines and theories are
another instance of non-reductive cooperation or interaction between
sciences. I noted, above, the post-war emergence of interdisciplinary
areas of research, the so-called hyphenated sciences such as neuro-acoustics, radioastronomy, biophysics,
etc. (Klein 1990,
Galison 1997). On a smaller scale, in the domain of, for instance, physics, one can
find semiclassical models in quantum physics or models developed
around phenomena where the limiting reduction relations are singular
or catastrophic (caustic optics and quantum chaos) (Cat 1998;
Batterman 2002; Belot 2005). Such semiclassical explanatory models
have not found successful quantum substitutes and have placed structural
explanations at the heart of the relation between classical and
quantum physics (Bokulich 2008). The general form of pervasive cases
of emergence has been characterized with the notion of contextual
emergence (Bishop and Atmanspacher 2006): properties, behaviors and
their laws on a restricted, lower-level, single-scale, domain are
necessary but not sufficient for the properties, behaviors of another,
e.g., higher-level one, not even of itself. The latter are also
determined by contingent contexts (contingent features of the state
space of the relevant system). The interstitial formation of more or
less stable small-scale syntheses and cross-boundary
“alliances” has been common in most sciences since the
early 20th century. Indeed, it is crucial to development in
model building and growing empirical relevance in fields ranging anywhere from
biochemistry to cell ecology, or from econophysics to thermodynamical
cosmology. Similar cases can be found in chemistry and
the biomedical sciences.

Conceptual unity. The conceptual dimension of cross-cutting has
been developed in connection with the possibility of cross-cutting
natural kinds that challenges taxonomical monism. Categories of
taxonomy and domains of description are interest-relative, as are
rationality and objectivity (Khalidi 1998; his view shares positions
and attitudes with Longino 1989; Elgin 1996 and 1997). Cross-cutting
taxonomic systems, then, are not conceptually inconsistent or inapplicable. Both the interest-relativity and hybridity feature prominently in the context of ontological
pluralism (see below).

Another, more general, unifying element of this kind is Holton’s
notion of themata. Themata are conceptual values that are a
priori yet contingent (both individual and social), informing and
organizing presuppositions that factor centrally in the evolution of
the science: continuity/discontinuity, harmony, quantification,
symmetry, conservation, mechanicism, hierarchy, etc. (Holton 1973).
Unity of some kind is itself a thematic element. A more complex and
comprehensive unit of organized scientific practice is the notion of
the various styles of reasoning, such as statistical,
analogical modeling, taxonomical, genetic/genealogical or laboratory
styles; each is a cluster of epistemic standards, questions, tools,
ontology, and self-authenticating or stabilizing protocols (Hacking
1996; see below for the relevance of this account of a priori elements
to claims of global disunity; the account shares distinctive features
of Kuhn’s notion of paradigm).

Another model of non-reductive unification is historical and
diachronic: it emphasizes the genealogical and historical identity of
disciplines, which has become complex through interaction. The
interaction extends to relations between specific sciences, philosophy
and philosophy of science (Hull 1988). Hull has endorsed an image of science as a
process, modeling historical unity after a Darwinian-style pattern of
evolution (developing an earlier suggestion by Popper). Part of the account is the idea of
disciplines as evolutionary historical individuals, which can be revised with
the help of more recent ideas of biological individuality: hybrid
unity as an external model of unity as integration or coordination of
individual disciplines and disciplinary projects, e.g., characterized
by a form of occurrence, evolution or development whose tracking and
identification involves a conjunction with other disciplines, projects
and domains of resources, from within science or outside science. This
diachronic perspective can accommodate models of discovery, in which
genealogical unity integrates a variety of resources that can be both
theoretical and applied, or scientific and non-scientific (an example,
from physics, the discovery of superconductivity, can be found in
Holton, Chang and Jurkowitz 1996). Some models of unity below provide
further examples.

A generalization of the notion of interfield theories is the idea that
unity is interconnection: Fields are unified theoretically
and practically (Grantham 2004). This is an extension of the original
modes of unity or identity that single out individual disciplines.
Theoretical unification involves conceptual, ontological and
explanatory relations. Practical unification involves heuristic
dependence, confirmational dependence and methodological integration.
The social dimension of the epistemology of scientific disciplines
relies on institutional unity. With regard to disciplines as
professions, this kind of unity has rested on institutional
arrangements such as professional organizations for
self-identification and self-regulation, university mechanisms of
growth and reproduction through certification, funding and training,
and communication and record through journals.

Many examples of unity without reduction are local rather than global,
and are not merely a phase in a global and linear project or tradition of
unification (or integration). They are typically focused on science as a human
activity. From that standpoint, unification is typically understood or advocated as a piecemeal description and strategy of
collaboration (on the distinction between global integration and local
interdisciplinarity, see Klein 1990). Cases are restricted to specific
models, phenomena or situations.

Material unity. A more recent approach to the connection
between different research areas has focused on a material level of scientific practice, with attention to
the use of instruments and other material objects (Galison 1997, Bowker
and Star 1999). For instance, the material unity of natural
philosophy in the 16th and 17th centuries relied
on the circulation, transformation and application of objects, in
their concrete and abstract representations (Bertoloni-Meli 2006). The
latter correspond to the imaginary systems and their representations,
which we call models. The evolution of objects and images across
different theories and experiments and their developments in
19th-century natural philosophy provide a historical model
of scientific development; but the approach is not meant to illustrate
reductive materialism, since the same objects and models work and are
perceived as vehicles for abstract ideas, institutions, cultures, etc., or prompted by them
(Cat 2013). On one view, objects are regarded as elements in so-called trading
zones (see below) with shifting meanings in the evolution of
20th-century physics, such as with the cloud chamber which
was first relevant to meteorology and next to particle physics
(Galison 1997). Alternatively, material objects have been given the status of boundary
objects, which provide the opportunity for experts from different
fields to collaborate through their respective understanding of the
system in question and their respective goals (Bowker and Star
1999).

Graphic unity. At the concrete perceptual level, recent
accounts emphasize the role of visual representations in the sciences
and suggest what may be called graphic unification of the
sciences. Their cognitive roles, methodological and rhetorical,
include establishing and disseminating facts and their so-called
virtual witnessing, revealing empirical relations, testing their fit
with available patterns of more abstract theoretical relations
(theoretical integration), suggesting new ones, aiding in
computations, serving as aesthetic devices, etc. But these uses are
not homogeneous across different sciences and make visible
disciplinary differences. We may equally speak of graphic pluralism.
The rates of diagram use in research publications appear to vary along
the hard-soft axis of the pyramidal hierarchy of the sciences, from
physics through chemistry, biology, psychology and economics to
sociology and political science (Smith et al. 2000): use is highest in
physics, intuitively identified as the hardest science, where hardness
is understood in terms of consensus, codification, theoretical
integration and factual stability, and lowest in the most interpretive
fields, marked by instability of results. Similarly, the
same variation occurs among sub-disciplines within each discipline. The
kinds of images and their contents also vary across disciplines and
within disciplines, ranging from hand-made images of particular
specimens to hand-made or mechanically generated images of particulars
standing in for types, to schematic images of geometric patterns in
space or time, or to abstract diagrams representing quantitative
relations. Importantly, graphic tools circulate like other cognitive
tools between areas of research that they in turn connect (Galison
1997, Daston and Galison 2007, Lopes 2009; see also Lynch and Woolgar
1990; Baigrie 1996; Jones and Galison 1998; Cat 2001,
2013 and 2014; and Kaiser 2005).

Disciplinary unity and collaboration. A field of study has
focused on disciplines broadly and their relations. Disciplines
constitute a broader unit of analysis of connection in the sciences
that is characterized, for instance, by their domain of inquiry,
cognitive tools and social structure (Bechtel 1987). Unification of
disciplines, in that sense, can be interdisciplinary,
multidisciplinary, crossdisciplinary and
transdisciplinary (Klein 1990, Kellert 2008, Repko 2012). It
might involve a researcher borrowing from different disciplines or the
collaboration of different researchers. Neither modality of
connection amounts to a straightforward generalization of, or reduction
to, any single discipline, theory, etc. In either case, the strategic
development is typically defended for its heuristic problem-solving or
innovative powers, as it is prompted by a problem considered complex
in that it does not arise or cannot be fully treated within the
purview of one specific discipline unified or individuated around some
potentially non-unique set of elements such as scope of empirical
phenomena, rules, standards, techniques, conceptual and material
tools, aims, social institutions, etc. Indicators of disciplinary
unity may vary (Kuhn 1962, Klein 1990, Kellert 2008).
Interdisciplinary research or collaboration creates a new
discipline or project, such as interfield research, often leaving the
existence of the original ones intact. Multidisciplinary work
involves the juxtaposition of the treatments and aims of the different
disciplines involved in addressing a common problem.
Crossdisciplinary work involves borrowing resources from one
discipline to serve the aims of a project in another.
Transdisciplinary work is a synthetic creation that
encompasses work from different disciplines (Klein 1990, Kellert 2008,
Brigandt 2010, Hoffmann, Schmidt and Nersessian 2012, Osbeck et al.
2011, Repko 2012). These different modes of synthesis or connection
are not mutually exclusive.

In this context, methodological unity often takes the form of borrowing standards and techniques for the application of formal and empirical methods. They range from calculational techniques and tools for theoretical modeling and simulation of phenomena to techniques for modeling of data, use of instruments and conducting experiments (e.g., the culture of field experiments and, more recently, randomized control trials across natural and social sciences). A key element of scientific practice often ignored by philosophical
analysis is expertise. As part of different forms of methodological unity, it is key to the acceptance and successful appropriation of techniques. Recent accounts of multidisciplinary
collaboration as a human activity have focused on the dynamics of
integrating different kinds of expertise around common systems or
goals of research (Collins and Evans 2007, Gorman 2002). The same perspective can accommodate the recent interest in so-called mixed methods, e.g., different forms of integration of quantitative and qualitative methods and approaches in the social sciences.

A general model of local interconnection which has acquired widespread
attention and application in different sciences is the anthropological
model of trading zone, where hybrid languages and meanings
are developed that allow for interaction without straightforward
extension of any party’s original language or framework (Galison
1997). Galison has applied this kind of anthropological analysis to the
subcultures of experimentation. This strategy aims to explain the
strength, coherence and continuity of science in terms of local
coordinations of intercalated levels of symbolic procedures
and meanings, instruments and arguments.

At the experimental level, instruments, as found objects, acquire new
meanings, developments and uses as they bridge over the transitions
between theories, observations or theory-laden observations.
Instruments and experimental projects in the case of Big Science also
bring together, synchronically and interactively, the skills,
standards and other resources from different communities, and change
each in turn (on interdisciplinary experimentation see also Osbeck et
al. 2011). Patterns of laboratory research are shared by the different
sciences, not just instruments but general strategies of
reconfiguration of human researchers and natural entities researched
(Knorr-Cetina 1992), statistical standards (e.g., statistical significance) and ideals of replication. At the same time, attention has been paid to the
different ways in which experimental approaches differ among the
sciences (Knorr-Cetina 1992, Guala 2005, Weber 2005) but also to how they have been transferred (e.g., field experiments and randomized control trials) or integrated (e.g., mixed methods combining quantitative and qualitative techniques).

Empirical work in sociology and cognitive psychology on scientific
collaboration has led to a broader perspective including a number of
dimensions of interdisciplinary cooperation, involving identification
of conflicts and the setting of sufficient so-called common ground
integrators: for instance, shared—pre-existing, revised and
newly developed—concepts, terminology, standards, techniques,
aims, information, tools, expertise, skills (abstract, dialectical,
creative and holistic thinking), cognitive and social ethos
(curiosity, tolerance, flexibility, humility, receptivity,
reflexivity, honesty, team-play), social interaction, institutional
structures and geography (Cummings and Kiesler 2005, Klein 1990,
Kockelmans 1979, Repko 2012). Sociological studies of scientific
collaboration can in principle place the connective models of unity
within the more general scope of social epistemology, for instance, in
relation to distributive cognition (beyond the focus on strategies of
consensus within communities).

The broad and dynamical approach to processes of interdisciplinary
integration may effectively be understood to describe the production
of different sorts and degrees of epistemic emergence. The integrated
accounts require shared (old or new) assumptions and may involve a
case of ontological integration, for instance in causal models.
Suggested kinds of interdisciplinary causal-model integration are the
following: sequential causal order in a process or mechanism cutting
across disciplinary divides; horizontal parallel integration of
different causal models of different elements of a complex phenomenon;
horizontal joint causal model of the same effect; and vertical or
cross-level causal integration (see emergent or top-down causality,
below) (Repko 2012, Kockelmans 1979).

Talk of cooperation and coordination for the purpose of forming hybrid
cross-disciplines, emergent disciplines or projects and products
revolves often around two issues: conflicts and the challenge of
striking a balance between cooperation and autonomy. By extension of
the discussion of value conflict in moral and political philosophy,
one must acknowledge the extent to which scientific practice is based
on accepting limited conflict over necessary commitments and making
epistemic and/or non-epistemic compromises (a volitional, not just
cognitive aspect; on this view against unity as social consensus, see
Rescher 1993, Cat 2005 and 2010; van Bouwel 2009; comp. Repko 2012;
Hoffmann, Schmidt and Nersessian 2012).

Aesthetic value. Finally, epistemic values of unity may rely on
subsidiary considerations of aesthetic value. Nevertheless,
consideration of beauty, elegance or harmony may also provide autonomous grounds
for adopting or pursuing varieties of unification in terms of
simplicity and patterns of order (regularity of specific relations)
(McAllister 1996, Glynn 2010 and Orrell 2012). Whether aesthetic
judgements have any epistemic import depends on metaphysical,
cognitive or pragmatic assumptions.

4. Ontological unities

4.1 Ontological unities and reduction

Since Nagel’s influential model of reduction by derivation most
discussions of unity of science have been cast in terms of reductions
between concepts, the entities they describe, and between theories
incorporating the descriptive concepts. Ontological unity is expressed
by a preferred set of such ontological units. In terms of concepts featured
in preferred descriptions, explanatory or not, reduction endorses
taxonomical monism, a privileged set of kinds of things. These
privileged kinds are often known as natural kinds, although
the notion admits of multiple interpretations, ranging from the more
conventionalist to the more essentialist. Regardless, the
fundamental units are ambiguous with respect to their status as either
entity or property. Reduction may determine the fundamental kinds or
level through the analysis of entities. A distinctive ontological
model is this: The hierarchy of levels of reduction is fixed by
part-whole relations. The levels of aggregation of entities
run all the way down to atomic particles and field parts, rendering
microphysics the fundamental science.

A classic reference of this kind, away from the syntactic model, is
Oppenheim and Putnam’s “The Unity of Science as a Working
Hypothesis” (Oppenheim and Putnam 1958; Oppenheim and Hempel had
worked in the 1930s on taxonomy and typology, a question of broad
intellectual, social and political relevance in Germany at the time).
Oppenheim and Putnam intended to articulate an idea of science as a
reductive unity of concepts and laws to those of the most elementary
elements. They also defended it as an empirical hypothesis—not
an a priori ideal, project or precondition—about science. Moreover, they
claimed that its evolution manifested a trend in that unified direction
out of the smallest entities and lowest levels of aggregation. In an
important sense, the evolution of science recapitulates, in the
reverse, the evolution of matter, from aggregates of elementary
particles to the formation of complex organisms and species (we find a
similar assumption in Weinberg’s downward arrow of explanation).
Unity, then, is manifested not just in mereological form, but
also diachronically, genealogically or historically.

A weaker form of ontological reduction has been advocated for the
biomedical sciences with the causal notion of partial reductions:
explanations of localized scope (focused on parts of higher-level
systems only) laying out a causal mechanism connecting different
levels in the hierarchy of composition and organization (Schaffner
1993; Schaffner 2006; Scerri has similarly discussed degrees of
reduction in Scerri 1994). An extensional, domain-relative approach
introduces the distinction between “domain preserving” and
“domain combining” reductions. Domain-preserving
reductions are intra-level reductions and occur between
\(T_1\) and its predecessor \(T_2\). In this
parlance, however, \(T_2\) “reduces” to
\(T_1\). This notion of “reduction” does not
refer to any relation of explanation (Nickles 1973).

The claim that reduction, as a relation of explanation, needs to be a
relation between theories or even involve any theory has also been
challenged. One such challenge focuses on “inter-level”
explanations in the form of compositional redescription and
causal mechanisms (Wimsatt 1976). The role of biconditionals or even
Schaffner-type identities, as factual relations, is heuristic (Wimsatt
1976). The heuristic value extends to the preservation of the
higher-level, reduced concepts, especially for cognitive and pragmatic
reasons, including reasons of empirical evidence. This amounts to
rejecting the structural, formal approach to unity and reductionism favored by the logical-positivist tradition.
Reductionism is another example of the functional, purposive nature of
scientific practice. The metaphysical view that follows is a pragmatic
and non-eliminative realism (Wimsatt 2006). As a heuristic, this kind
of non-eliminative pragmatic reductionism is a complex stance. It is, across levels,
integrative and intransitive, compositional, mechanistic and
functionally localized, approximative and abstractive. It is bound to adopt
false idealizations, focusing on regularities and stable common
behavior, circumstances and properties. It is also constrained in its
rational calculations and methods, tool-binding, and problem-relative.
The heuristic value of eliminative inter-level reductions has been
defended as well (Poirier 2006).

The appeal to formal laws and deductive relations is dropped for sets
of concepts or vocabularies in the replacement analysis
(Spector 1978). This approach allows for talk of entity reduction or
branch reduction, and even direct theory replacement without the
operation of laws, and circumvents vexing difficulties raised by
bridge principles and the deductive derivability condition
(self-reduction, infinite regress, etc). Formal relations only
guarantee, but do not define, the reduction relation. Replacement
functions are meta-linguistic statements. As Sellars argued in
the case of explanation, this account distinguishes between reduction
and the testing of reduction, and highlights the role of derivations in both. Finally,
replacement can be in practice or in theory. Replacement in practice
does not advocate elimination of the reduced or replaced entities or
concepts (Spector 1978).

Note, however, the following: the compartmentalization of theories and
their concepts or vocabulary into levels neglects the existence of
empirically meaningful and causally explanatory relations between
entities or properties at different levels. If these relations are
excluded from theoretical knowledge and relegated to mere bridge
principles, the possibility of completeness of knowledge is jeopardized.
Maximizing completeness of knowledge here requires a descriptive unity of all
phenomena at all levels and anything between these levels. Any bounded
region or body of knowledge neglecting such cross-boundary
interactions is radically incomplete, and not just confirmationally or
evidentially so; we may refer to this problem as the problem of
cross-boundary incompleteness as either intra-level or
horizontal incompleteness and, on a hierarchy, the problem of
inter-level or vertical incompleteness (Kincaid 1997; Cat
1998).

The most radical form of reduction as replacement is often called
eliminativism. The position has made a considerable impact in
philosophy of psychology and philosophy of mind (Churchland 1981;
Churchland 1986). On this view the vocabulary of the reducing theories
(neurobiology) eliminates and replaces that of the reduced ones
(psychology), leaving no substantive relation between them (which is
only a replacement rule) (see also
eliminative materialism).

In a general semantic account, Sarkar distinguishes different kinds of
reduction in terms of four criteria, two epistemological and two
ontological: fundamentalism, approximation, abstract hierarchy and
spatial hierarchy. Fundamentalism implies that the features
of a system can be explained in terms only of factors and rules from
another realm. Abstract hierarchy is the assumption that the
representation of a system involves a hierarchy of levels of
organization with the explanatory factors being located at the lower
levels. Spatial hierarchy is a special case of abstract
hierarchy in which the criterion of hierarchical relation is a spatial
part-whole or containment relation. Strong reduction satisfies the
three “substantive” criteria, whereas weak reduction only
satisfies fundamentalism. Approximate reductions—strong and
hierarchical—are those which satisfy the criterion of
fundamentalism only approximately (Sarkar 1998; the merit of Sarkar’s
proposal resides in its systematic attention to hierarchical
conditions and, more originally, to different conditions of
approximation; see also Ramsey 1995; Lange 1995; Cat
2005).

The semantic turn extends to more recent notions of models that do not
fall under the strict semantic or model-theoretic notion of
mathematical structures (Giere 1999; Morgan and Morrison 1999; Cat
2005). This is a more flexible framework about relevant formal
relations and the scope of relevant empirical situations; and it is
implicitly or explicitly adopted by most accounts of unity without
reduction. One may add the primacy of temporal representation and
temporal parts, temporal hierarchy or temporal
compositionality, first emphasized by Oppenheim and Putnam as a
model of genealogical or diachronic unity. This framework applies to
processes both of evolution and development (a more recent version in
McGivern 2008 and Love and Hüttemann 2011).

The shift in the accounts of scientific theory from syntactic to
semantic approaches has changed conceptual perspectives and,
accordingly, formulations and evaluations of reductive relations and
reductionism. However, examples of the semantic approach focusing on
mathematical structures and satisfaction of set-theoretic relations
have focused on syntactic features—including the axiomatic form
of a theory—in the discussion of reduction (Sarkar 1998, da
Costa and French 2003). In this sense, the structuralist approach can
be construed as a neo-Nagelian account, while an alternative line of research
has championed the more traditional structuralist semantic
approach (Balzer and Moulines 1996; Moulines 2006; Ruttkamp 2000;
Ruttkamp and Heidema 2005).

4.2 Ontological unities and antireductionism

Headed in the opposite direction, arguments concerning new concepts
such as multiple realizability and supervenience, introduced by
Putnam, Kim, Fodor and others, have led to talk of higher-level functionalism, a
distinction between type-type and token-token reductions and the
examination of its implications. The concepts of emergence,
supervenience and downward causation are related metaphysical tools
for generating and evaluating proposals about unity and reduction in
the sciences. This literature has enjoyed its chief sources and
developments in general metaphysics and in philosophy of mind and
psychology (Davidson 1969; Putnam 1975; Fodor 1975; Kim 1993).

Supervenience, first introduced by Davidson in discussions of
mental properties, is the notion that a system with properties on one
level is composed of entities on a lower level and that its properties
are determined by the properties of the lower-level entities or
states. The relation of determination is that no changes at the
higher level occur without changes at the lower level. Like
token-reductionism, supervenience has been adopted by many as the poor
man’s reductionism (see the entry on
supervenience).
A different case for the autonomy of the macrolevel is based on the
notion of multiple supervenience (Kincaid 1997; Meyering 2000).

The autonomy of the special sciences from physics has been defended in
terms of a distinction between type-physicalism and
token-physicalism (Fodor 1974; Fodor countered Oppenheim and
Putnam’s hypothesis under the rubric “the disunity of
science”; see the entry on
physicalism).
The key logical assumption is the type-token distinction, that types
are realized by more specific tokens: e.g., the type animal is instantiated
by different species, and the type tiger or electron by multiple
individual token tigers or electrons. Type-physicalism is
characterized by a type-type identity between the
predicates/properties in the laws of the special sciences and those of
physics. By contrast, token-physicalism is based on the token-token
identity between the predicates/properties of the special sciences and
those of physics; every event under a special law falls under a law of
physics and bridge laws express contingent token-identities between
events. Token-physicalism operates as a demarcation criterion for
materialism. Fodor argued that the predicates of the special sciences
correspond to infinite or open-ended disjunctions of physical
predicates, and these disjunctions do not constitute natural kinds
identified by an associated law. Token-physicalism is the only
alternative. All special kinds of events are physical but the special
sciences are not physics (for criticisms based on the presuppositions
in Fodor’s argument, see Sober 1999).

The denial of remedial, weaker forms of reductionism is the basis for
the concept of emergence (Humphreys 1997, Bedau and Humphreys
2008). Different accounts have attempted to articulate the idea of a
whole being different from or more than the mere sum of its parts (see
the entry on
emergent properties).
Emergence has been described beyond logical relations, synchronically
as an ontological property and diachronically as a material process of
fusion, in which the powers of the separate constituents lose their
separate existence and effects (Humphreys 1997). This concept has been
widely applied in discussions of complexity (see below).
Unlike the earliest antireductionist models of complexity in terms of
holism and cybernetic properties, more recent approaches track the
role of constituent parts (Simon 1996). Weak emergence has been
opposed to nominal and strong forms of emergence. The nominal kind
simply represents that some macro-properties cannot be properties of
micro-constituents. The strong form is based on supervenience and
irreducibility, with a role for the occurrence of autonomous downwards causation upon any
constituents (see below). Weak emergence is linked to processes
stemming from the states and powers of constituents, with a reductive
notion of downwards causation of the system as a resultant of
constituents’ effects; yet the connection is not a matter of Nagelian
formal derivation, but of implementation through, for instance,
computational aggregation and iteration. Weak emergence, then, can be
defined in terms of simulation: a macro-property, state or fact is
weakly emergent if and only if it can be derived from its
micro-constituents only by simulation (Bedau 2008) (see entry on
simulations in science).

Connected to the concept of emergence is top-down or
downward causation. It captures the autonomous and genuine
causal power of higher-level entities or states, especially upon
lower-level ones. The most extreme and most controversial versions
include a violation of the laws that regulate the lower level (Meehl and
Sellars 1956; Campbell 1974). Weaker forms require compatibility with
the microlaws (for a brief survey and discussion see Robinson 2005; on
downward causation without top-down causes, see Craver and Bechtel
2007, Bishop 2012). The very concept has become the subject of some
interdisciplinary interest in the sciences (Ellis, Noble and O’Connor
2012).

Another general argument for the autonomy of the macrolevel in the
form of non-reductive materialism has been a cognitive type of
functionalism, namely, cognitive pragmatism (Van Gulick 1992). This
account links ontology to epistemology. It discusses four pragmatic
dimensions of representations: the nature of the causal interaction
between theory-user and the theory, the nature of the goals to whose
realization the theory can contribute, the role of indexical elements
in fixing representational content, and differences in the
individuating principles applied by the theory to its types (Wimsatt
and Spector’s arguments above are of this kind). A more ontologically
substantive account of functional reduction is Ramsey’s bottom-up
construction by reduction: transformation reductions
streamline formulations of theories in such a way that they extend
basic theories upwards by engineering their application to specific
context or phenomena. As a consequence, they reveal, by construction,
new relations and systems that are antecedently absent from a
scientist’s understanding of the theory—independently of a top
or reduced theory (Ramsey 1995). A weaker framework of ontological
unification is categorial unity, wherein abstract categories
such as causality, information, etc, are attached to the
interpretation of the specific variables and properties in models of
phenomena (see Cat 2000, 2001 and 2006).

5. Disunity

A more radical departure from logical-positivist standards of unity is
the recent criticism of the methodological value of reductionism and
unification in the sciences, as well as of their position in culture
and society. From the descriptive standpoint, many
views under the rubric of disunity are versions of positions mentioned
above. The difference is mainly normative and a matter of emphasis,
perspective, and stance. This view argues for the replacement of the
emphasis on global unity—including unity of method—by
emphasizing disunity and epistemological and ontological
pluralism.

5.1 The Stanford School

An influential picture of disunity comes from related works by the members of the so-called Stanford
School, e.g., John Dupré, Ian Hacking, Peter Galison, Patrick
Suppes and Nancy Cartwright. Disunity is, in general terms, a
rejection of universalism and uniformity both methodological and
metaphysical. While the view can be construed in terms of specific
anti-reductionist claims and positions, they share an emphasis on
the rejection of restrictive accounts of unity. Through their work, the
rubric of disunity has acquired a visibility parallel to the one once
acquired by unity, as an inspiring philosophical rallying cry.

From a metaphysical point of view, the disunity of science can be
given adequate metaphysical foundations that make pluralism compatible
with realism (Dupré 1993). Dupré opposes a mechanistic
paradigm of unity characterized by determinism, reductionism and
essentialism. The paradigm spreads the values and methods of physics
to other sciences that he thinks are scientifically and socially
deleterious. Disunity appears characterized by three pluralistic theses:
against essentialism, there is always a plurality of classifications
of reality into kinds; against reductionism, there exists equal
reality and causal efficacy of systems at different levels of
description, that is, the microlevel is not causally complete, leaving
room for downward causation; and against epistemological monism, there
is no single methodology that supports a single criterion of
scientificity, nor a universal domain of its applicability, only a
plurality of epistemic and non-epistemic virtues. The unitary concept
of science should be understood, following the later Wittgenstein, as
a family-resemblance concept (for a criticism of Dupré’s ideas,
see Mitchell 2003 and Sklar 2003).

Against the universalism of explanatory laws, Cartwright has argued
that laws cannot be both universal and true, as Hempel required in his influential account of explanation and demarcation; there exist only
patchworks of laws and local cooperation. Like Dupré,
Cartwright adopts a kind of scientific realism but denies that there
is a universal order, whether represented by a theory of everything or
a corresponding a priori metaphysical principle (Cartwright 1983). The
empirical evidence, she argues, along the same lines as Wimsatt,
suggests far more strongly the idea of a dappled world, best
represented by a patchwork of laws, often in local cooperation (e.g.,
local identifications, causal interactions, joint actions and
piecemeal corrections and correlations). Theories apply only where and
to the extent that their interpretive models fit the phenomena studied
(Cartwright 1999). But this is not their alleged universal factual
scope. They only hold in special conditions like ceteris
paribus. Cartwright’s pluralism is not just opposed to vertical
reductionism but also horizontal imperialism, or universalism and
globalism. She explains their more or less general domain of
application in terms of causal capacities and arrangements she calls
nomological machines (Cartwright 1989; Cartwright 1999). The
regularities they bring about depend on a shielded environment. As a
matter of empiricism, this is the reason that it is in the controlled
environment of laboratories and experiments, where causal interference
is shielded off, that factual regularities are manifested. The
controlled, stable regular world is an engineered world.
Representation rests on intervention (comp. Hacking 1983). On these
grounds, as a matter of holism Cartwright rejects strong distinctions between
natural and social sciences, and like Otto Neurath, between the natural and
the social world. Whether as a hypothesis or as an ideal, the debates
continue over the form, scope and significance of unification in the
sciences. Cartwright’s theses and arguments rest on numerous
assumptions that have been the target of insightful criticism (Winsberg et
al. 2000; Hoefer 2003; Sklar 2003; Hohwy 2003; Teller 2004; McArthur
2006; and Ruphy 2016).

Disunity and autonomy of levels have been associated, conversely, with
antirealism, meaning instrumentalist or empiricist heuristics. This
includes, for Fodor and Rosenberg, higher-level sciences such as
biology and sociology (Fodor 1974; Rosenberg 1994; Huneman 2010).
Against this picture, Dupré’s and Cartwright’s attacks on
uniformly global unity and reductionism, above, might seem surprising
in that they include an endorsement, in causal terms, of
realism.[4]
Rohrlich has defended a similar realist position about weaker,
conceptual (cognitive) antireductionism, although on the grounds of
the mathematical success of derivational explanatory reductions
(Rohrlich 2001). Ruphy, however, has argued that antireductionism
merely amounts to a general methodological prescription and is too
weak to yield uncontroversial metaphysical lessons; these are in fact
based on general metaphysical commitments external to scientific
practice (Ruphy 2005 and 2016).

5.2 Pluralism. The Minnesota School

The question of the metaphysical significance of disunity and
anti-reductionism takes one straight to the larger issue of the
epistemology and metaphysics (and aesthetics, social culture and
politics) of pluralism. And here one encounters the familiar
issues and notions such as conceptual schemes, frameworks and worldviews,
incommensurability, relativism, contextualism and perspectivalism (for
a general discussion see Lynch 1998; on perspectivalism about
scientific models see Giere 1999 and Rueger 2005). In connection with
relativism and instrumentalism, pluralism has typically been
associated with antirealism about taxonomical practices. But it has
been defended from the standpoint of realism (for instance,
Dupré 1993 and Chakravartty 2011). Pluralism about knowledge of
mind-independent facts can be formulated in terms of different ways of
distributing properties (sociability-based pluralism), with more
specific commitments about the ontological status of the related
elements and their plural contextual manifestations of powers or
dispositions (Chakravartty 2011, Cartwright 2007).

Pluralism applies widely: to concepts, explanations, virtues, goals,
methods, models, kinds of representations (see above for graphic
pluralism), and so on. In this sense, pluralism has been defended as a
general framework that rejects the ideal of consensus in cognitive,
evaluative and practical matters, against pure skepticism (nothing
goes) or indifferentism (anything goes), including a defense of
preferential and contextual rationality that notes the role of
contextual rational commitments, by analogy with political forms of
engagement (Rescher 1993, van Bouwel 2009, Cat 2012).

Consider at least four distinctions. They are formulated here in terms
of concepts, facts, and descriptions, but they apply also to values,
virtues, methods, etc.:

Vertical vs. horizontal pluralism. Vertical pluralism is
inter-level pluralism, the view that there is more than one level of
factual description or kind of fact and that each is irreducible, equally
fundamental, or ontologically/conceptually autonomous. Horizontal
pluralism is intra-level pluralism, the view that there may be
incompatible descriptions or facts on the same level of discourse
(Lynch 1998). For instance, the plurality of explanatory causes to be
chosen from or integrated in biology or physics has been defended as a
lesson in pluralism (Sober 1999).

Global vs. local pluralism. Global pluralism is pluralism
about every type of fact or description. Global horizontal pluralism
is the view that there may be incompatible descriptions of the same
type of fact. Global vertical pluralism is the view that no type of
fact or description reduces to any other. Local horizontal and
vertical pluralism are about one type of fact or description (Lynch
1998).

Isolationist vs. integrative pluralism. Isolationist
pluralism concerns underdetermination: the choice from a
disjunction of equivalent types of descriptions (Mitchell) or of
incompatible partial representations or models of phenomena with the
same intended scope (Longino); the representational incompatibility
may be traced to competing values, aims, or assumptions in ceteris
paribus laws. It is the most common situation in the sciences.
Integrative pluralism is the conjunctive or holistic requirement of
different types of descriptions or facts (Mitchell 2003 and 2009;
contrast with the more isolationist position in Longino 2002, her
essay in Kellert, Longino and Waters 2006, and Longino 2013). In the
same spirit, I have mentioned, for instance, the case of mixed methods
integrating qualitative and quantitative techniques. This position is analogous
to agonistic engagement in political models of deliberative democracy,
between the extremes of so-called consensual mainstreaming and
antagonistic exclusivism (van Bouwel 2009). Each can be vertical or
horizontal (see the discussion of interdisciplinary integration,
above).

Internal vs. external pluralism. From a methodological
standpoint, an internal perspective is naturalistic in its reliance on
the contingent plurality of scientific practice by any of its
standards. This has been defended by members of the so-called
Minnesota School (Kellert, Longino and Waters 2006) and Ruphy (Ruphy
2016). The alternative, which Ruphy has attributed to Dupré and
Cartwright, is the adoption of a metaphysical commitment external to
actual scientific practice.

These distinctions can accommodate a number of epistemic and
metaphysical pluralist accounts including different versions of
taxonomical pluralism. These range from the more conventional and
contingent (from Elgin 1997 to astronomical kinds in Ruphy 2016),
through the more grounded in contexts of practice (categorization work
in Bowker and Star 1999, quantitative kinds in Cat 2016, and kinds in
the life sciences and chemistry in Kendig 2016) and the interactive
(Hacking’s interactive kinds in the human sciences), to the more
metaphysically substantive. From a methodological standpoint, to the distinctions
above we can add the distinction between descriptive and evaluative
attitudes to pluralism, and contrast them further with the activist
approach (defended by Chang in Chang 2012) encouraging plurality where
productive (Chang focuses on the experimental reactivation of
historically abandoned programs). As Neurath’s discussion of unity
suggested, discussions of pluralism are also matters of social
epistemology, with social and political correlates and consequences,
for instance regarding issues of toleration and democracy.

5.3 Metapluralism

The preference for one kind of pluralism over another is typically
motivated by epistemic virtues or constraints. Meta-pluralism,
pluralism about pluralism, is conceivable in similar terms; it can be
found in the formulation of the so-called pluralist stance (Kellert,
Longino and Waters 2006). The pluralist stance
replaces metaphysical principles with scientific, or empirical,
methodological rules and aims that have been “tested”.
Like Dupré’s and Cartwright’s metaphysical positions, its
metascientific position must be empirically tested. Metascientific
conclusions and assumptions cannot be considered universal or
necessary, but local and contingent, relative to scientific interests
and purposes. Thus, on this view, complexity does not always require
interdisciplinarity (Kellert 2008); and in some situations the
pluralist stance will defend reductions or specialization over
interdisciplinary integration (Kellert, Longino and Waters 2006, Cat
2010 and 2012, Rescher 1993).

6. Conclusion: Why unity? And what difference does it really make?

Views on matters of unity and unification make a difference in both
science and philosophy. In science they provide strong heuristic or
methodological guidance and even justification for hypotheses,
projects, and specific goals. In this sense, different rallying cries
and idioms such as simplicity, unity, disunity, emergence or
interdisciplinarity, have been endowed with a normative value. Their
evaluative role extends broadly. They are used to provide legitimacy,
even if rhetorically, in social contexts, especially in situations
involving sources of funding and profit. They set a standard for what
carries the authority and legitimacy of being scientific. As a result, they make a
difference in scientific evaluation, management and application,
especially in public domains such as healthcare and economic
decision-making. For instance, pointing to the complexity of causal
structures challenges traditional deterministic or simple causal
strategies of policy decision-making with known risks and unknown
effects of known properties (Mitchell 2009). Last but not least is the
influence that implicit assumptions about what unification can do
have on science education (Klein 1990).

Philosophically, assumptions about unification help choose what sort
of philosophical questions to pursue and what target areas to explore.
For instance, fundamentalist assumptions typically lead one to address
epistemological and metaphysical issues in terms of only results and
interpretations of fundamental levels of disciplines. Assumptions of
this sort help define what counts as scientific and shape scientistic
or naturalized philosophical projects. In this sense, they determine,
or at least strongly suggest, what relevant science carries authority
in philosophical debate.

At the end of the day one should not lose sight of the larger context
that sustains problems and projects in most disciplines and practices.
We are as free to pursue them as Kant’s dove is free to fly, that is, not
without the surrounding air resistance to flap its wings upon and
against. Philosophy was once thought to stand for the systematic unity
of the sciences. The foundational character of unity became the
distinctive project of philosophy, in which conceptual unity played
the role of the standard of intelligibility. In addition, the ideal of
unity, frequently under the guise of harmony, has long been a standard
of aesthetic virtue. (This image has been eloquently challenged by, for
instance, John Bailey and Iris Murdoch; see Bailey 1976 and Murdoch 1992.)
Unities and unifications help us meet cognitive and practical demands
upon our life as well as cultural demands upon our self-images that
are both cosmic and earthly. It is not surprising that talk of the
many meanings of unity, namely, fundamental level, unification,
system, organization, universality, simplicity, atomism, reduction,
harmony, complexity or totality, can bring an urgent grip on our
intellectual imagination.

Bibliography

Auyang, S., 1995, How Is Quantum Field Theory Possible?,
New York: Oxford University Press.

Cat, J., 2006, “On fuzzy empiricism and
fuzzy-set models of causality: What is all the fuzz about?”,
Philosophy of Science, 73(1).

Cat, J., 2010, “On Reduction: Analyzing
Theories and Synthesizing Models; Cooperation and Compromises”,
Discussion Papers Series, Centre for the Philosophy of the
Natural and Social Sciences, London School of Economics, London.

Meehl, P. and W. Sellars, 1956, “The Concept of
Emergence”, in H. Feigl and M. Scriven (eds.), The Foundations of Science
and the Concepts of Psychology and Psychoanalysis, Minneapolis:
University of Minnesota Press, 239–252.

Meyering, T.C., 2000, “Physicalism and downward causation in
psychology and the special sciences”, Inquiry, 43:
181–202.

van Gulick, R., 1992, “Nonreductive materialism and the
nature of intertheoretical constraint”, in A. Beckermann, H. Flohr and
J. Kim (eds.), Emergence or Reduction? Essays on the Prospects of
Nonreductive Physicalism, New York: de Gruyter.