Much attention has been drawn to the cognitive basis of innovation. While interesting in many ways, this poses the threat of falling back on traditional internalist assumptions about cognition. We oppose the ensuing contrast between internal cognitive processing and the external public practices and technologies that such internal cognitive systems might produce and utilize. We argue that innovation is best understood through the Gibsonian notion of affordance, and that many innovative practices emerge from the external scaffolding of cognitive processes. The public engageability that allows the disclosure of hidden affordances is not only, and not even primarily, a property of cognitive products, but of cognitive processes. We elaborate on these claims by drawing on Dutilh Novaes’ account of formal languages as cognitive technologies and Hutto’s Narrative Practice Hypothesis. This paves the way to sketch some general principles on how to strategically seek innovation by targeting hidden affordances.

An intricate, long, and occasionally heated debate surrounds Boltzmann’s H-theorem (1872) and his combinatorial interpretation of the second law (1877). After almost a century of devoted and knowledgeable scholarship, there is still no agreement as to whether Boltzmann changed his view of the second law after Loschmidt’s 1876 reversibility argument or whether he had already been holding a probabilistic conception for some years at that point. In this paper, I argue that there was no abrupt statistical turn. In the first part, I discuss the development of Boltzmann’s research from 1868 to the formulation of the H-theorem. This reconstruction shows that Boltzmann adopted a pluralistic strategy based on the interplay between a kinetic and a combinatorial approach. Moreover, it shows that the extensive use of asymptotic conditions allowed Boltzmann to bracket the problem of exceptions. In the second part, I suggest that neither Loschmidt’s challenge nor Boltzmann’s response to it concerned the H-theorem. The close relation between the theorem and the reversibility argument is a consequence of later investigations of the subject.
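For reference, the two results at issue can be stated in modern textbook notation (not Boltzmann’s original formulation):

```latex
% H-theorem (1872): for a dilute gas whose one-particle velocity
% distribution f(\mathbf{v},t) evolves under the Boltzmann equation,
H(t) = \int f(\mathbf{v},t)\,\ln f(\mathbf{v},t)\,\mathrm{d}^3 v ,
\qquad \frac{\mathrm{d}H}{\mathrm{d}t} \le 0 ,
% with equality exactly at the Maxwell--Boltzmann distribution.

% Combinatorial interpretation (1877): the entropy of a macrostate is
% proportional to the logarithm of the number W of compatible microstates,
S = k \ln W .
```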

The book revives the neutral monism of Mach, James, and Russell and applies the updated view to the problems of redefining physicalism, explaining the origins of sensation, and deriving extended physical objects and systems from an ontology of events.

One of the liveliest debates about cognition concerns whether our cognition sometimes extends beyond our brains and bodies. One party says Yes, another No. This paper shows that the debate between these parties has been epistemologically confused and requires reorienting. Both parties frequently appeal to empirical considerations and to extra-empirical theoretical virtues to support claims about where cognition is. These things should constrain their claims, but cannot do all the work hoped of them. This is because of an overlooked fact, uncovered in this paper: we could never distinguish the rival views empirically or by typical theoretical virtues. I show this by drawing on recent work on testing, predictive accuracy, and theoretical virtues. The recommendation that emerges is that we step back from the debate about where cognition is to the epistemology of what cognition is.

I defend a pragmatist reinterpretation of Sellars’s famous manifest-scientific distinction. I claim that in order to do justice to this important distinction we must first recognize, despite what philosophers—including, arguably, Sellars—often make of it, that the distinction does not draw an epistemological or metaphysical boundary between different kinds of objects and events, but a pragmatic boundary between different ways in which we interact with objects and events. Put differently, I argue that the manifest-scientific distinction is best understood not as a metaphysical distinction between apparent and real objects and events, nor as an epistemological distinction between perceptible and imperceptible objects and events, but rather as a distinction, not necessarily rigid over time, between distinct ways in which we collectively deal, in practice, with objects and events.

What makes beliefs thrive? In this paper, we model the dissemination of bona fide science versus pseudoscience, making use of Dan Sperber’s epidemiological model of representations. Drawing on cognitive research on the roots of irrational beliefs and on the institutional arrangement of science, we explain the dissemination of beliefs in terms of their salience to human cognition and their ability to adapt to specific cultural ecologies. By contrasting the cultural development of science and pseudoscience along a number of dimensions, we gain a better understanding of their underlying epistemic differences. Pseudoscience can achieve widespread acceptance by tapping into evolved cognitive mechanisms, thus sacrificing intellectual integrity for intuitive appeal. Science, by contrast, defies those deeply held intuitions precisely because it is institutionally arranged to track objective patterns in the world.

A realistic and dialectical conception of the epistemology of science is advanced according to which the acquisition of instrumental knowledge is parasitic upon the acquisition, by successive approximation, of theoretical knowledge. This conception is extended to provide an epistemological characterization of reference and of natural kinds, and it is integrated into recent naturalistic treatments of knowledge. Implications for several current issues in the philosophy of science are explored.

There is little doubt that we perceive the world as tensed—that is, as consisting of a past, present, and future, each with a different ontological status—and as transient—that is, as involving a passage of time. We also have the ability to execute precisely timed behaviors that appear to depend upon making correct temporal judgments about which changes are truly present and which are not. A common claim made by scientists and philosophers is that our experiences of entities enduring through transient changes are illusory and that our apparently accurately timed behaviors do not reflect dynamical time. We argue that our experiences of objects enduring through transient changes need not be thought of as illusory even if time is not dynamic at the fundamental level of reality. For the dynamic properties we experience objects as having need not be fundamental properties; they could be weakly emergent from static temporal properties. Temporal properties, on this view, are similar to ordinary properties like solidity, which are correctly experienced as properties of medium-sized material bodies even though they are not instantiated at the fundamental level of reality.

Since the 1980s, several studies of visual perception have persuasively argued that important aspects of human vision are best accounted for not by recourse to inner mental representations but rather through socially observable actions and behaviors (e.g. Lynch 1985, Latour 1986, Lynch 1990, Goodwin 1994, Goodwin 1997, Sharrock & Coulter 1998). While there are clearly physiological mechanisms required for vision, psychological accounts of perception in terms of inner mental representations have been dislodged from their position as the basic term in the interface between human beings and their environment and replaced with terms such as "social practice" and "vernacular intelligibility." The focus for these …

Technical artifacts have the capacity to fulfill their function in virtue of their physicochemical make-up. An explanation that purports to explicate this relation between artifact function and structure can be called a technological explanation. It might be argued, and Peter Kroes has in fact done so, that there is something peculiar about technological explanations in that they are intrinsically normative in some sense. Since the notion of artifact function is a normative one (if an artifact has a proper function, it ought to behave in specific ways), an explanation of an artifact’s function must inherit this normativity. In this paper I will resist this conclusion by outlining and defending a ‘buck-passing account’ of the normativity of technological explanations. I will first argue that it is important to distinguish properly between (1) a theory of function ascriptions and (2) an explanation of how a function is realized. The task of the former is to spell out the conditions under which one is justified in ascribing a function to an artifact; the latter should show how the physicochemical make-up of an artifact enables it to fulfill its function. Second, I wish to maintain that a good theory of function ascriptions should account for the normativity of these ascriptions. Provided such a function theory can be formulated — as I think it can — a technological explanation may pass the normativity buck to it. Third, to flesh out these abstract claims, I show how a particular function theory — to wit, the ICE theory by Pieter Vermaas and Wybo Houkes — can be dovetailed smoothly with my own thoughts on technological explanation.

Weighing complex sets of evidence (i.e., evidence from multiple disciplines, often divergent in its implications) is increasingly central to properly informed decision-making. Determining “where the weight of evidence lies” is essential both for making maximal use of available evidence and for figuring out what to make of it. Weighing evidence in this sense requires an approach that can handle a wide range of evidential sources (completeness), that can combine the evidence with rigor, and that can do so in a way other experts can assess and critique (transparency). But the democratic context in need of weight-of-evidence analysis also places additional constraints on the process, including communicability of the process to the general public, the need for an approach that can be used across a broad range of contexts (scope), and timeliness of process (practicality). I will compare qualitative and quantitative approaches with respect to both traditional epistemic criteria and criteria that arise from the democratic context, and argue that a qualitative explanatory approach can best meet the criteria and elucidate how to utilize the other approaches. This should not be surprising, as the approach I argue for is the one that most closely tracks general scientific reasoning.

Peter Achinstein (1990, 1991) analyses the scientific debate that took place in the eighteenth and nineteenth centuries concerning the nature of light. He offers a probabilistic account of the methods employed by both particle theorists and wave theorists, and rejects any analysis of this debate in terms of coherence. He characterizes coherence by reference to William Whewell's writings concerning how "consilience of inductions" establishes an acceptable theory (Whewell, 1847). Achinstein rejects this analysis because of its vagueness and lack of reference to empirical data, concluding that coherence is insufficient to account for the belief change that took place during the wave-particle debate.

The future of human computation (HC) benefits from examining tasks that agents already perform and designing environments that give those tasks computational significance. We call this natural human computation (NHC). We consider the possible future of NHC through the lens of Swarm!, an application under development for Google Glass. Swarm! motivates users to compute the solutions to a class of economic optimization problems by engaging the attention dynamics of crowds. We argue that anticipating and managing economies of attention provides one of the most tantalizing future applications of NHC.

Furthering the dialogue with J. Wentzel van Huyssteen over his way of reconciling Christianity and science while reflecting on human uniqueness, I offer a philosophical analysis of the phenomenon of transsexuality. The focus of my analysis is the implications of transsexuality for the metaphysics of reductive naturalism. Envisioning a pluralistic ontology of the sexed human body, I propose to account for human sexuality within the general framework of normative pragmatism. The context of my reflections is a theology of sexual diversity, which I believe van Huyssteen has good reasons to endorse.

It is often claimed that the conclusion of a deductively valid argument is contained in its premises. Popper refuted this claim when he showed that an empirical theory can be expected always to have logical consequences that transcend the current understanding of the theory. This implies that no formalisation of an empirical theory will enable the derivation of all its logical consequences. I call this result ‘Popper-incompleteness.’ This result appears to be consistent with the view of deductive reasoning as a process of unfurling the content of the premises; but I suggest that the result about validity impugns this theory of reasoning.

Based on an analysis of protective measurements, we show that the quantum state represents the physical state of a single quantum system. This result is more definite than the PBR theorem [Pusey, Barrett, and Rudolph, Nature Phys. 8, 475 (2012)].

The meaning of the wave function and its evolution are investigated. First, we argue that the wave function in quantum mechanics is a description of the random discontinuous motion of particles, and that the modulus squared of the wave function gives the probability density of the particles being in certain locations in space. Next, we show that the linear non-relativistic evolution of the wave function of an isolated system obeys the free Schrödinger equation, owing to the requirements of spacetime translation invariance and relativistic invariance. Thirdly, we argue that the random discontinuous motion of particles may lead to a stochastic, nonlinear collapse evolution of the wave function. A discrete model of energy-conserved wave-function collapse is proposed and shown to be consistent with existing experiments and our macroscopic experience. In addition, we give a critical analysis of the de Broglie-Bohm theory, the many-worlds interpretation, and other dynamical collapse theories, and briefly discuss the issue of unifying quantum mechanics and relativity.
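In standard notation, the two familiar results the abstract invokes are the Born rule and the free Schrödinger equation (textbook statements, not the authors’ derivations):

```latex
% Born rule: the modulus squared of the wave function is the
% probability density for the particle's position.
\rho(x,t) = |\psi(x,t)|^{2}

% Free, non-relativistic Schroedinger equation for a particle of mass m:
i\hbar\,\frac{\partial \psi(x,t)}{\partial t}
  = -\frac{\hbar^{2}}{2m}\,\nabla^{2}\psi(x,t)
```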

In a previous paper, I suggested a framework for modeling the hierarchical organization of mechanisms. In this short addendum I want to highlight some connections between my approach and the statistics and machine learning literature, as well as some of its limitations not mentioned in that paper.

In this article I tackle the question of how the hierarchical order of mechanisms can be represented within a causal graph framework. I illustrate an answer to this question proposed by Casini, Illari, Russo, and Williamson, and provide an example showing that their formalism does not support two important features of nested mechanisms: (i) a mechanism’s submechanisms typically interact causally with other parts of said mechanism, and (ii) intervening in some of a mechanism’s parts should have some influence on the phenomena the mechanism brings about. Finally, I sketch an alternative approach that takes (i) and (ii) into account.

In this paper I demonstrate that the causal structure of flagpole-like systems can be determined by applying causal graph theory. No additional information is needed about the temporal ordering of events or about how parameters of the systems of interest can be manipulated.
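The point can be illustrated with a toy simulation (my own sketch, with hypothetical variable names and functional forms, not the paper’s formalism). The flagpole’s height H and the sun’s elevation S are independent causes of the shadow length L, so H and S are marginally uncorrelated but become dependent once L is conditioned on. This collider signature identifies the structure H → L ← S from observational data alone, with no temporal or interventional information:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical data-generating process: flagpole height H and sun
# elevation S are independent; shadow length L depends on both.
H = rng.normal(5.0, 1.0, n)                    # flagpole height
S = rng.uniform(0.4, 1.2, n)                   # sun elevation (radians)
L = H / np.tan(S) + rng.normal(0.0, 0.1, n)    # shadow length plus noise

def residualize(x, z):
    """Residuals of x after linear regression on z."""
    return x - np.polyval(np.polyfit(z, x, 1), z)

# Marginally, H and S are (nearly) uncorrelated ...
print(np.corrcoef(H, S)[0, 1])                 # ~ 0.00
# ... but conditioning on L induces a dependence: L is a collider,
# so the orientation H -> L <- S is identifiable without time order.
print(np.corrcoef(residualize(H, L), residualize(S, L))[0, 1])  # clearly nonzero
```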

Modeling mechanisms is central to the biological sciences – for purposes of explanation, prediction, extrapolation, and manipulation. A closer look at the philosophical literature reveals that mechanisms are predominantly modeled in a purely qualitative way. That is, mechanistic models are conceived of as representing how certain entities and activities are spatially and temporally organized so that they bring about the behavior of the mechanism in question. Although this adequately characterizes how mechanisms are represented in biology textbooks, contemporary biological research practice shows the need for quantitative, probabilistic models of mechanisms, too. In this paper we argue that the formal framework of causal graph theory is well suited to provide us with models of biological mechanisms that incorporate quantitative and probabilistic information. On the basis of an example from contemporary biological practice, namely feedback regulation of fatty acid biosynthesis in Brassica napus, we show that causal graph theoretical models can account for feedback as well as for the multi-level character of mechanisms. However, we do not claim that causal graph theoretical representations of mechanisms are advantageous in all respects and should replace common qualitative models. Rather, we endorse the more balanced view that causal graph theoretical models of mechanisms are useful for some purposes, while being insufficient for others.
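One standard way to capture feedback within an acyclic causal graph is to unroll the loop over time, so that each time-indexed variable is its own node. The sketch below is my own toy illustration of that move, with hypothetical variables and parameters; it is a generic end-product-inhibition loop, not the authors’ model of fatty acid biosynthesis in Brassica napus:

```python
import numpy as np

rng = np.random.default_rng(1)
n_runs, n_steps = 1_000, 50

# Unrolled feedback loop: enzyme activity E_t produces product P_t,
# and P_t inhibits E_{t+1}. The time-indexed graph is acyclic:
# E_t -> P_t -> E_{t+1} -> P_{t+1} -> ...
E = np.zeros((n_runs, n_steps))
P = np.zeros((n_runs, n_steps))
E[:, 0] = 1.0

for t in range(n_steps - 1):
    P[:, t] = 0.8 * E[:, t] + rng.normal(0.0, 0.01, n_runs)   # production
    E[:, t + 1] = 1.0 / (1.0 + 2.0 * P[:, t])                 # inhibition
P[:, -1] = 0.8 * E[:, -1] + rng.normal(0.0, 0.01, n_runs)

# The chain settles into a stable steady state, a quantitative
# prediction that a purely qualitative diagram leaves implicit.
print(E[:, -1].mean(), P[:, -1].mean())
```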

In this paper we show that the application of Occam’s razor to the theory of causal Bayes nets yields a neat definition of direct causation. In particular, we show that Occam’s razor implies Woodward’s (2003) definition of direct causation, provided suitable intervention variables exist and the causal Markov condition (CMC) is satisfied. We also show how Occam’s razor can account for Woodward-style direct causal relationships when only stochastic intervention variables are available.
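Woodward’s definition can be made concrete in a toy structural model (my own sketch, with hypothetical equations, not the paper’s formalism): X is a direct cause of Y relative to a variable set V just in case some intervention on X changes Y, or its probability distribution, while all other variables in V are held fixed by interventions. In the chain X → Z → Y below, X is a cause of Y but not a direct one:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

def mean_y(do_x=None, do_z=None):
    """Tiny structural model X -> Z -> Y with hard interventions."""
    X = rng.normal(0.0, 1.0, n) if do_x is None else np.full(n, do_x)
    Z = 2.0 * X + rng.normal(0.0, 1.0, n) if do_z is None else np.full(n, do_z)
    Y = 3.0 * Z + rng.normal(0.0, 1.0, n)
    return Y.mean()

# Wiggling X while Z is held fixed leaves Y unchanged (up to noise):
# X is not a direct cause of Y relative to {X, Z, Y}.
print(mean_y(do_x=0.0, do_z=1.0), mean_y(do_x=5.0, do_z=1.0))  # both ~ 3.0

# Wiggling Z while X is held fixed changes Y: Z is a direct cause of Y.
print(mean_y(do_z=0.0, do_x=0.0), mean_y(do_z=1.0, do_x=0.0))  # ~ 0.0 vs ~ 3.0
```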