Animal models have received particular attention as key examples of material models. In this paper, we argue that the specificities of establishing animal models, which are at once living beings and epistemological tools, necessitate a more complex account of animal models as materialised models. This becomes particularly evident in animal-based models of diseases that occur only in humans: in these cases, the representational relation between animal model and human patient must be generated and validated. The first part of this paper presents an account of how disease-specific animal models are established, drawing on the example of transgenic mouse models for Alzheimer’s disease. We introduce an account of validation as a three-fold process: from human being to experimental organism; from experimental organism to animal model; and from animal model to human patient. This process draws upon clinical relevance as much as scientific practices and results in disease-specific, yet incomplete, animal models. The second part of the paper argues that the incompleteness of models can be described in terms of multi-level abstractions. We qualify this notion by pointing to different experimental techniques and targets of modelling, which give rise to a plurality of models for a specific disease.

Taking the visual appeal of the ‘bell curve’ as an example, this paper discusses to what extent the availability of quantitative approaches (here: statistics) that come along with representational standards directly affects qualitative concepts of scientific reasoning (here: normality). Within this paper I focus on the relationship between normality, as defined by scientific enterprise, and normativity, which results from the very process of standardisation itself. Two hypotheses guide this analysis: (1) normality, as defined by the natural and the life sciences, must be regarded as an ontological fiction that is nonetheless epistemologically important, and (2) standardised, canonical visualisations (such as the ‘bell curve’) shape scientific thinking and reasoning to a significant degree. I restrict my analysis to the epistemological function of scientific representations of data: this means identifying key strategies of producing graphs and images in scientific practice. As a starting point, it is crucial to evaluate to what degree graphs and images can be seen as guiding scientific reasoning itself, for instance by attributing to them a certain epistemological function within a given field of research.

Given that visualisations via medical imaging have increased tremendously over the last decades, the pervasive presence of colour-coded brain slices generated on the basis of functional imaging (i.e. neuroimaging techniques) has led to the assumption of so-called kinds of brains or cognitive profiles that might be especially related to non-healthy humans affected by neurological, neuropsychological or psychiatric syndromes or disorders. In clinical contexts especially, one must consider that visualisations through medical imaging are suggestive in a twofold way. Imaging data not only visually render pathological entities, but also tend to represent objective and concrete evidence for the psychophysical states in question. This article aims to identify, from an epistemological point of view, key issues in visually rendering psychiatric disorders via functional imaging approaches within the neurosciences.