Monday, 28 April 2014

“When you have eliminated the impossible, whatever remains, however improbable, must be the truth.” Thus Arthur Conan Doyle has Sherlock Holmes describe a crucial part of his method of solving detective cases. Sherlock Holmes often takes pride in adhering to principles of scientific reasoning. Whether or not this particular element of his analysis can be called scientific is not straightforward to decide, however. Do scientists use ‘no alternatives arguments’ of the kind described above? Is it justified to infer a theory’s truth from the observation that no other acceptable theory is known? Can this be done even when empirical confirmation of the theory in question is sketchy or entirely absent?

Thursday, 24 April 2014

In recent years, formal and empirical approaches have become central to the study of group decision-making. Social network analysis, agent-based modeling, and simulation techniques are now widely used in sociology, political science, social psychology, and economics. Philosophers, too, increasingly point to the potential of these approaches for addressing questions in political and moral philosophy, formal epistemology, and the philosophy of science more generally. While the importance of social dynamics and network structures for investigating decision-making has been widely acknowledged, the application and results of these novel approaches raise a number of philosophical issues that have not yet been discussed in the literature. This workshop will bring together social scientists, philosophers, decision theorists, and psychologists to explore and discuss the potentials and limitations of these approaches for scientific practice and philosophy alike.

We invite submissions of a title, a short abstract of 100 words, and an extended abstract of 1000 words by 25 May 2014. We anticipate that there will be space for about four contributed talks.

Friday, 11 April 2014

LMU Munich, 11-13 December 2014, www.lmu.de/abmp2014

In the past two decades, agent-based models (ABMs) have become ubiquitous in philosophy and various sciences. ABMs have been applied, for example, to study the evolution of norms and language, to understand migration patterns of past civilizations, and to investigate how population levels change in ecosystems over time. In contrast with classical economic models or population-level models in biology, ABMs are praised for their freedom from restrictive assumptions and for their flexibility. Nonetheless, many of the methodological and epistemological questions raised by ABMs have yet to be fully articulated and answered. For example, there are unresolved debates about how to test (or "validate") ABMs, about the scope of their applicability in philosophy and the sciences, and about their implications for our understanding of reduction, emergence, and complexity in the sciences. This conference aims to bring together an interdisciplinary group of researchers interested in the foundations of agent-based modeling and in how the practice can inform and be informed by philosophy.

Topics of the conference will include, but will not be limited to:

Advantages and disadvantages of agent-based models in relation to classical economic and biological models

Testing and/or "validating" agent-based models

How agent-based models inform discussions of reduction and/or emergence in the sciences

Agent-based models and complexity

Applications of ABMs in philosophy, which may include, but are not limited to, investigating the evolution of norms and/or language, or studying the dynamics of scientific communities and theory/paradigm change

Thursday, 10 April 2014

With permission, I'm posting some of David Chalmers' quick thoughts/responses to Panu Raatikainen's critical notice of David's recent aufbauesque (2012) book, Constructing the World (some lectures on this are here on youtube):
---------------------

(1) Are bridge laws allowed in the scrutability base, and if so does this trivialize scrutability theses?

Bridge laws are certainly not disallowed from the base in general (indeed, I'd have psychophysical bridge laws in my own base). When I said that bridge laws were not allowed in the base, I was discussing a specific scrutability thesis: microphysical scrutability (where the base must be microphysical truths alone). On the other hand, building in separate bridge laws for water, kangaroos, and everything else will lead to a non-compact scrutability base. So there's no trivialization of the central compact scrutability thesis here.

(2) Does the appeal to the $\omega$-rule trivialize the scrutability of mathematical truths?

My discussion of the $\omega$-rule is intended to illustrate my response to the Gödelian objection to the scrutability of mathematical truths, rather than to give a general account of the knowability of mathematical truths. It's an example of an idealized infinitary process that can get around Gödelian limitations. The $\omega$-rule suffices to settle first-order arithmetical truths, but of course other infinitary methods will be needed in other domains. It's just false that inference rules assume the knowability of their premises, so there's no trivialization here.
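
For readers unfamiliar with it, the $\omega$-rule at issue can be stated, in its standard form for first-order arithmetic, as follows:

$$\frac{\varphi(0), \quad \varphi(1), \quad \varphi(2), \quad \ldots}{\forall n\, \varphi(n)}$$

Since applying the rule requires establishing infinitely many premises, it is available only to an idealized reasoner, which is precisely the sense in which it is an infinitary method that escapes Gödelian limitations on finitary axiomatic systems.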

(3) Is there a circularity in nomic truths being scrutable from microphysical truths and vice versa?

If one distinguishes ramsified and non-ramsified versions of microphysical truths, any apparent circularity disappears. Non-ramsified microphysical truths are scrutable from ramsified causal/nomic truths, which are in turn scrutable from ramsified microphysical truths (including microphysical laws).

(4) Do the Newman problem and Scheffler's problem undermine the ramsification procedure?

The "contemporary Newman problem" isn't a problem for my thesis, as my ramsification base isn't an observational base. As for Scheffler's problem: my first reaction (though this really is quick) is that Scheffler's example involves either ramsifying a trivial theory or giving an incomplete regimentation (and then ramsification) of a nontrivial theory. If those material conditionals really constitute the whole content of the theory (and the theory gives the whole content of the relevant theoretical term), then it's trivial in the way suggested. If the theory is formulated more completely, e.g. with nomic or causal conditionals, the objection won't arise. Certainly the problem won't arise for the Ramsey sentences that my procedure yields.

(5) Why think special science truths are scrutable?

The arguments for scrutability of special science truths are in Chapters 3 and 4 (supplemented by 6), which are not discussed in the critical notice. The excursus on the unity of science is not intended as a primary argument for scrutability of special science truths. Rather, it is connecting the scrutability thesis to the unity/reduction literature and making the case that the thesis is a weak sort of unity/reduction thesis that survives common objections to unity or reduction theses.

The Congress of Logic, Methodology and Philosophy of Science (CLMPS) is organized every four years by the Division of Logic, Methodology and Philosophy of Science (DLMPS). The Philosophical Society of Finland, the Academy of Finland Centre of Excellence in the Philosophy of the Social Sciences (TINT), and the Division of Theoretical Philosophy (Department of Philosophy, History, Culture and Art Studies) are proud to host the 15th Congress of Logic, Methodology and Philosophy of Science (CLMPS 2015). CLMPS 2015 is supported by the University of Helsinki and the Federation of Finnish Learned Societies.

CLMPS 2015 is co-located with the European Summer Meeting of the Association for Symbolic Logic, Logic Colloquium 2015 (the abstract submission for Logic Colloquium 2015 opens in early 2015).

The congress will host six plenary lectures and several invited lectures. The names of the plenary lecture speakers and invited speakers will be announced soon.

B. General Philosophy of Science
B1. Methodology
B2. Formal Philosophy of Science and Formal Epistemology
B3. Metaphysical Issues in the Philosophy of Science
B4. Ethical and Political Issues in the Philosophy of Science
B5. Historical Aspects in the Philosophy of Science

C. Philosophical Issues of Particular Disciplines
C1. Philosophy of the Formal Sciences (incl. Logic, Mathematics, Statistics, Computer Science)
C2. Philosophy of the Physical Sciences (incl. Physics, Chemistry, Earth Science, Climate Science)
C3. Philosophy of the Life Sciences
C4. Philosophy of the Cognitive and Behavioural Sciences
C5. Philosophy of the Humanities and the Social Sciences
C6. Philosophy of the Applied Sciences and Technology
C7. Philosophy of Medicine
C8. Metaphilosophy

In addition, some submitted abstracts will be invited to contribute to the International Union of History and Philosophy of Science (IUHPS) Joint Commission Symposium Sessions if the programme committee considers the abstracts well suited for IUHPS themes.

The abstract should include: (i) a general description of the format and the topic of the proposed symposium and its significance (up to 500 words); (ii) a 300-word abstract of each paper (3-4 papers).

Each accepted contributed symposium will be allocated a full two-hour session.

AFFILIATED MEETINGS: Affiliated meetings are half-day to full-day symposia that run parallel to the CLMPS 2015 programme, and belong to the congress programme. Please consult the CLMPS 2015 submission guidelines for further information.

RULES FOR MULTIPLE PRESENTATIONS

+ Maximally one contributed individual paper.
+ One is allowed to present a second paper of which one is a co-author, but then the main author of this paper must submit the paper and be registered as a participant.
+ If one participates in a contributed symposium proposal or an affiliated meeting proposal, or is an invited speaker, one is not allowed to submit an individual contributed paper in which one is the main author (it is possible to be a co-author of a contributed paper, but then the main author of this paper must submit the paper and be registered as a participant).

Monday, 7 April 2014

This is the third in a series of three posts in which I rehearse what I hope to say at the Author Meets Critics session for Lara Buchak's tremendous* new book Risk and Rationality at the Pacific APA in a couple of weeks. The previous two posts are here and here. In the first post, I gave an overview of risk-weighted expected utility theory, Buchak's alternative to expected utility theory. In the second post, I gave a prima facie reason for worrying about any departure from expected utility theory: if an agent violates expected utility theory (perhaps whilst exhibiting the sort of risk-sensitivity that Buchak's theory permits), then her preferences amongst the acts don't line up with her estimates of the value of those acts. In this post, I want to consider a way of reconciling the preferences Buchak permits with the normative claims of expected utility theory.

I will be making a standard move. I will be redescribing the space of outcomes in such a way that we can understand any Buchakian agent as setting her preferences in line with her expectation (and thus estimate) of the value of that act.
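
To make the contrast concrete, here is a minimal Python sketch of risk-weighted expected utility in its rank-dependent form; the function names and the example gamble are my own illustration, not Buchak's notation. With the identity risk function $r(x) = x$, the formula reduces to ordinary expected utility.

```python
def reu(outcomes, r=lambda x: x):
    """Risk-weighted expected utility of an act.

    outcomes: list of (probability, utility) pairs whose probabilities sum to 1.
    r: risk function from [0, 1] to [0, 1], increasing, with r(0) = 0, r(1) = 1.
    With r(x) = x this reduces to ordinary expected utility.
    """
    # Rank-dependence: order outcomes from worst to best utility.
    ranked = sorted(outcomes, key=lambda pu: pu[1])
    total = ranked[0][1]  # the agent gets at least the worst utility
    for j in range(1, len(ranked)):
        # probability of getting at least the j-th worst outcome
        p_at_least = sum(p for p, _ in ranked[j:])
        # each utility increment is weighted by r of that probability
        total += r(p_at_least) * (ranked[j][1] - ranked[j - 1][1])
    return total

# A fair-coin gamble between utilities 0 and 100:
gamble = [(0.5, 0), (0.5, 100)]
print(reu(gamble))                    # identity r: 50.0 (= expected utility)
print(reu(gamble, r=lambda x: x**2))  # risk-avoidant r(x) = x^2: 25.0
```

The risk-avoidant agent values the gamble below its expected utility because the chance of doing better than the worst case is discounted by $r$; this is exactly the kind of preference pattern that expected utility theory cannot accommodate with any assignment of utilities to the original outcomes.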

Thursday, 3 April 2014

In recent formal epistemology, a lot of attention has been paid to a programme that one might call accuracy-first epistemology. It is based on a particular account of the goodness of doxastic states: on this account, a doxastic state -- be it a full belief, a partial belief, or a comparative probability ordering -- is better the greater its accuracy; Alvin Goldman calls this account veritism. This informal idea is often then made mathematically precise and the resulting formal account of doxastic goodness is used to draw various epistemological conclusions.

In this post, the doxastic states with which I will be concerned are credences or partial beliefs. Such a doxastic state is represented by a single credence function $c$, which assigns a real number $0 \leq c(X) \leq 1$ to each proposition $X$ about which the agent has an opinion. Thus, a measure of accuracy is a function $A$ that takes a credence function $c$ and a possible world $w$ and returns a number $A(c, w)$ that measures the accuracy of $c$ at $w$: $A(c, w)$ takes values in $[-\infty, 0]$.
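
A concrete example of such a measure is the negative Brier score, which sums the squared differences between the agent's credences and the truth values at $w$. Here is a minimal Python sketch; the dictionary representation of credence functions and the toy propositions are my own illustration.

```python
def brier_accuracy(c, w):
    """(Negative) Brier score of credence function c at world w.

    c: dict mapping propositions to credences in [0, 1].
    w: dict mapping the same propositions to truth values (1 = true, 0 = false).
    Returns A(c, w) <= 0, with 0 for perfect accuracy at w.
    """
    return -sum((w[X] - c[X]) ** 2 for X in c)

# Credences in a proposition R ("rain") and its negation:
c = {"R": 0.8, "not-R": 0.2}
print(brier_accuracy(c, {"R": 1, "not-R": 0}))  # rainy world: about -0.08
print(brier_accuracy(c, {"R": 0, "not-R": 1}))  # dry world: about -1.28
```

As expected, the confident credence in rain scores much better at the rainy world than at the dry one; an omniscient credence function (credence 1 in every truth, 0 in every falsehood) would score exactly 0 at its world.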

Beginning with Joyce 1998, a number of philosophers have given different characterisations of the legitimate measures of accuracy: Leitgeb and Pettigrew 2010; Joyce 2009; and D'Agostino and Sinigaglia 2009. Leitgeb and Pettigrew give a very narrow characterisation, as do D'Agostino and Sinigaglia: they agree that the so-called Brier score (or some strictly increasing transformation of it) is the only legitimate measure of accuracy. Joyce, on the other hand, gives a much broader characterisation. I find none of these characterisations adequate, though I won't enumerate my concerns here. Rather, in this post, I'd like to offer a new characterisation.