In this paper it is argued that qualitative theories (Q-theories) can be used to describe the statistical structure of cross-classified populations and that the notion of verisimilitude provides an appropriate tool for measuring the statistical adequacy of Q-theories. First, a short outline of the post-Popperian approaches to verisimilitude and of the related verisimilitudinarian non-falsificationist methodologies (VNF-methodologies) is given. Secondly, the notion of Q-theory is explicated, and the qualitative verisimilitude of Q-theories is defined. Afterwards, appropriate measures for the statistical verisimilitude of Q-theories are introduced, so as to obtain a clear formulation of the intuitive idea that the statistical truth about cross-classified populations can be approached by falsified Q-theories. Finally, it is argued that some basic intuitions underlying VNF-methodologies are shared by the so-called prediction logic developed by the statisticians and social scientists David K. Hildebrand, James D. Laing and Howard Rosenthal.

Perhaps because both explanation and prediction are key components of understanding, philosophers and psychologists often portray these two abilities as though they arise from the same competence, and sometimes they are taken to be the same competence. When explanation and prediction are associated in this way, they are taken to be two expressions of a single cognitive capacity that differ from one another only pragmatically. If the difference between prediction and explanation of human behavior is merely pragmatic, then anytime I predict someone’s future behavior, I would at that moment also have an explanation of the behavior. I argue that advocates of both the theory theory and the simulation theory accept the symmetry of psychological prediction and explanation. However, there is very good reason to believe that this hypothesis is false. Just as we can predict the occurrence of some physical phenomena that we have no explanation for, we are also able to make accurate predictions of intentional behavior without having an explanation. I argue that the prediction of human behavior is most often accomplished by statistical induction rather than by the attribution of mental states; explanations, however, are not given in these terms.

Research has shown that the brain is constantly making predictions about future events. Theories of prediction in perception, action and learning suggest that the brain serves to reduce the discrepancies between expectation and actual experience, i.e. by reducing the prediction error. Forward models of action and perception propose the generation of a predictive internal representation of the expected sensory outcome, which is matched to the actual sensory feedback. Shared neural representations have been found when experiencing one’s own and observing others’ actions, rewards, errors and emotions such as fear and pain. These general principles of the ‘predictive brain’ are well established and have already begun to be applied to social aspects of cognition. The application and relevance of these predictive principles to social cognition are discussed. Evidence is presented to argue that simple non-social cognitive processes can be extended to explain complex cognitive processes required for social interaction, with common neural activity seen for both social and non-social cognitions. A number of studies are included which demonstrate that bottom-up sensory input and top-down expectancies can be modulated by social information. The concept of competing social forward models and a partially distinct category of social prediction errors are introduced. The evolutionary implications of a ‘social predictive brain’ are also mentioned, along with the implications for psychopathology. The review presents a number of testable hypotheses and novel comparisons that aim to stimulate further discussion and integration between currently disparate fields of research, with regard to computational models, behavioural and neurophysiological data.
This promotes a relatively new platform for inquiry in social neuroscience with implications in social learning, theory of mind, empathy, the evolution of the social brain and potential strategies for treating social cognitive deficits.

This paper considers the question of whether predictions of wrongdoing are relevant to our moral obligations. After giving an analysis of ‘won’t’ claims (i.e., claims that an agent won’t Φ), the question is separated into two different issues: firstly, whether predictions of wrongdoing affect our objective moral obligations, and secondly, whether self-prediction of wrongdoing can be legitimately used in moral deliberation. I argue for an affirmative answer to both questions, although there are conditions that must be met for self-prediction to be appropriate in deliberation. The discussion illuminates an interesting and significant tension between agency and prediction.

An original study of the philosophical problems associated with inductive reasoning. Like most of the main questions in epistemology, the classical problem of induction arises from doubts about a mode of inference used to justify some of our most familiar and pervasive beliefs. The experience of each individual is limited and fragmentary, yet the scope of our beliefs is much wider; and it is the relation between belief and experience, in particular the belief that the future will in some respects resemble the past and the unobserved the observed, which forms the subject of this book. Dr Blackburn's first aim is to state the problem of induction properly, to show that there does exist a genuine problem immune to the solutions in vogue at present, yet not in principle insoluble. He gives an extended and original account of the concept of a reason and goes on to discuss prediction. In the end Dr Blackburn produces a rationale for belief in certain short-term predictions based on his reinterpretation of the classical principle of indifference. He claims that a justification for induction can be found along the lines he has suggested and must indeed be found there if anywhere.

Here we briefly review the concept of "prediction" within the context of classical relativity theory. We prove a theorem asserting that one may predict one's own future only in a closed universe. We then question whether prediction is possible at all (even in closed universes). We note that interest in prediction has stemmed from considering the epistemological predicament of the observer. We argue that the definitions of prediction found thus far in the literature do not fully appreciate this predicament. We propose a more adequate alternative and show that, under this definition, prediction is essentially impossible in general relativity.

Ever since the first meeting of the proponents of the emerging Logical Empiricism in 1923, there existed philosophical differences as well as personal rivalries between the groups in Berlin and Vienna, headed by Hans Reichenbach and Moritz Schlick, respectively. Early theoretical tensions between Schlick and Reichenbach were caused by Reichenbach's (neo-)Kantian roots (esp. his version of the relativized a priori); Reichenbach himself regarded the Vienna Circle as a sort of anti-realist "positivist school", as he described it in his Experience and Prediction (1938). One result of this divergence was Schlick's preference for Carnap over Reichenbach for a position at the University of Vienna (in 1926), and his decision not to serve as a co-editor with Reichenbach of the journal Erkenntnis that they jointly established in 1930 (which was then co-edited by Carnap and Reichenbach from 1930 to 1938). A second split was rooted in different views on induction and probability, which culminated in Reichenbach's refusal to serve as an invited author on probability within the International Encyclopedia of Unified Science series edited by Rudolf Carnap, Charles Morris and Otto Neurath from 1938 onwards. In this regard it is remarkable that Richard von Mises, the other leading figure of Logical Empiricism in Turkish exile, also criticized the theory of probability put forward by his former Berlin colleague. In this paper I analyse this controversial exchange, drawing on the relevant correspondence, and ask whether these (meta)philosophical differences were a typical feature of the pluralism inherent in Logical Empiricism in general.

An appreciation of the many roles of ‘precision-weighting’ (upping the gain on select populations of prediction error units) opens the door to better accounts of planning and ‘offline simulation’, makes suggestive contact with large bodies of work on embodied and situated cognition, and offers new perspectives on the ‘active brain’. Combined with the complex affordances of language and culture, and operating against the essential backdrop of a variety of more biologically basic ploys and stratagems, the result is a maximally context-sensitive, restless, constantly self-reconfiguring architecture.

Recent theory suggests that action prediction relies on a motor emulation mechanism that works by mapping observed actions onto the observer's own action system, so that predictions can be generated using the same predictive mechanisms that underlie action control. This suggests that action prediction may be more accurate when there is a more direct mapping between the stimulus and the observer. We tested this hypothesis by comparing prediction accuracy for two stimulus types: a mannequin stimulus, which contained information about the effectors used to produce the action, and a point stimulus, which contained identical dynamic information but no effector information. Prediction was more accurate for the mannequin stimulus. However, this effect was dependent on the observer having previous experience performing the observed action. This suggests that experienced and naïve observers might generate predictions in qualitatively different ways, which may relate to the presence of an internal representation of the action laid down through action performance.

We apply a cognitive modeling approach to the problem of measuring expertise on rank ordering problems. In these problems, people must order a set of items in terms of a given criterion (e.g., ordering American holidays through the calendar year). Using a cognitive model of behavior on this problem that allows for individual differences in knowledge, we are able to infer people's expertise directly from the rankings they provide. We show that our model-based measure of expertise outperforms self-report measures, taken both before and after completing the ordering of items, in terms of correlation with the actual accuracy of the answers. These results apply to six general knowledge tasks, like ordering American holidays, and two prediction tasks, involving sporting and television competitions. Based on these results, we discuss the potential and limitations of using cognitive models in assessing expertise.

This essay proposes to extend the model of apocalyptic argument developed in my recent book Arguing the Apocalypse (O'Leary, 1994) beyond the study of religious discourse, by applying this model to the debate over a well-publicized earthquake prediction that caused a widespread panic in the American midwest in December, 1990. The first section of the essay will summarize the essential elements of apocalyptic argument as I have earlier defined them; the second section will apply the model to the case of the New Madrid, Missouri, earthquake prediction, in order to demonstrate that certain patterns of reasoning characteristic of religious apocalyptic are present in the discourse over an anticipated local disaster. My ultimate purpose is to show that predictions of global and local catastrophe may serve as extreme cases that will illuminate the dynamics of predictive argument in general. Thus my argument will seek to undercut Daniel Bell's distinction between prophecy and prediction (Bell, 1973) by establishing that these discourses share identifiable formal and substantive characteristics, and depend for their rhetorical effect on anxiety, hope, fear, and excitement as modes of temporal anticipation.

Accurately predicting other people's actions may involve two processes: internal real-time simulation (dynamic updating) and matching recently perceived action images (static matching). Using priming of body parts, this study aimed to differentiate the two processes. Specifically, participants played a motion-controlled video game with either their arms or legs. They then observed arm movements of a point-light actor, which were briefly occluded from view, followed by a static test pose. Participants judged whether this test pose depicted a coherent continuation of the previously seen action (i.e., an “action prediction task”). Evidence of dynamic updating was obtained after compatible effector priming (i.e., arms), whereas incompatible effector priming (i.e., legs) indicated static matching. Together, the results support action prediction as engaging two distinct processes, dynamic simulation and static matching, and indicate that their relative contributions depend on contextual factors such as the compatibility of the body parts involved in performed and observed actions.

The term “predictive brain” depicts one of the most relevant concepts in cognitive neuroscience which emphasizes the importance of “looking into the future”, namely prediction, preparation, anticipation, prospection or expectations in various cognitive domains. Analogously, it has been suggested that predictive processing represents one of the fundamental principles of neural computations and that errors of prediction may be crucial for driving neural and cognitive processes as well as behavior. This review discusses research areas which have recognized the importance of prediction and introduces the relevant terminology and leading theories in the field in an attempt to abstract some generative mechanisms of predictive processing. Furthermore, we discuss the process of testing the validity of postulated expectations by matching these to the realized events and compare the subsequent processing of events which confirm and those which violate the initial predictions. We conclude by suggesting that, although a lot is known about this type of processing, there are still many open issues which need to be resolved before a unified theory of predictive processing can be postulated with regard to both cognitive and neural functioning.

A growing body of evidence in cognitive psychology and neuroscience suggests a deep interconnection between sensory-motor and language systems in the brain. Based on recent neurophysiological findings on the anatomo-functional organization of the fronto-parietal network, we present a computational model showing that language processing may have reused or co-developed organizing principles, functionality, and learning mechanisms typical of the premotor circuit. The proposed model combines principles of Hebbian topological self-organization and prediction learning. Trained on sequences of either motor or linguistic units, the network develops independent neuronal chains, formed by dedicated nodes encoding only context-specific stimuli. Moreover, neurons responding to the same stimulus or class of stimuli tend to cluster together to form topologically connected areas similar to those observed in the brain cortex. Simulations support a unitary explanatory framework reconciling neurophysiological motor data with established behavioral evidence on lexical acquisition, access, and recall.
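The Hebbian principle invoked in this abstract can be illustrated with a minimal sketch. This is not the paper's actual architecture; the array sizes, learning rate, and function name are illustrative assumptions. The core idea is simply that the weight between an input unit and an output unit grows in proportion to their co-activation.

```python
import numpy as np

def hebbian_update(w, x, y, eta=0.1):
    """One step of plain Hebbian learning: the weight from input unit j to
    output unit i grows in proportion to the co-activation y[i] * x[j]."""
    return w + eta * np.outer(y, x)

# Toy run: an input pattern x is repeatedly paired with an output pattern y.
x = np.array([1.0, 0.0, 1.0])   # active input units: 0 and 2
y = np.array([0.0, 1.0])        # active output unit: 1
w = np.zeros((2, 3))
for _ in range(10):
    w = hebbian_update(w, x, y)
# Weights are now nonzero only where input and output units were co-active.
```

The paper's model combines this principle with topological self-organization and prediction learning; the sketch shows only the associative ingredient.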

Prediction errors are a central notion in theoretical models of reinforcement learning, perceptual inference, decision-making and cognition, and prediction error signals have been reported across a wide range of brain regions and experimental paradigms. Here, we attempt to see the forest for the trees, considering the commonalities and differences of reported prediction error signals in light of recent suggestions that the computation of prediction errors forms a fundamental mode of brain function. We discuss where different types of prediction errors are encoded, how they are generated, and the different functional roles they fulfil. We suggest that while the encoding of prediction errors is a common computation across brain regions, the content and function of these error signals can be very different, and are determined by the afferent and efferent connections within the neural circuitry in which they arise.
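The reinforcement-learning notion of a prediction error mentioned here is standardly formalized by the delta rule, in which an expectation is nudged toward each outcome by a fraction of the error. The following is a generic textbook sketch, not tied to any particular brain region or study in the review; the learning rate `alpha` and function name are assumptions for illustration.

```python
def delta_rule(rewards, alpha=0.1, v0=0.0):
    """Classic delta-rule (Rescorla-Wagner-style) learning: on each trial the
    value estimate v is nudged by the prediction error (reward - v)."""
    v = v0
    errors = []
    for r in rewards:
        delta = r - v           # prediction error: outcome minus expectation
        errors.append(delta)
        v += alpha * delta      # update expectation toward the outcome
    return v, errors

v, errors = delta_rule([1.0] * 20)
# Errors shrink across trials as the constant outcome becomes expected.
```

The same update applies whatever the "reward" is, which is one reason prediction error signals with very different content can share a common computational form.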

I discuss a stochastic model of language learning and change. During a syntactic change, each speaker makes use of constructions from two different idealized grammars at variable rates. The model incorporates regularization, in that speakers have a slight preference for using the dominant idealized grammar. It also includes incrementation: the population is divided into two interacting generations. Children can detect correlations between age and speech. They then predict where the population's language is moving and speak according to that prediction, which represents a social force encouraging children not to sound outdated. Both regularization and incrementation turn out to be necessary for spontaneous language change to occur on a reasonable time scale and run to completion monotonically. Chance correlation between age and speech may be amplified by these social forces, eventually leading to a syntactic change through prediction-driven instability.
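As a rough illustration of how regularization and incrementation could jointly drive one variant to completion, consider the following toy simulation. It is a sketch under stated assumptions, not the paper's actual model; the parameters `bias` (regularization strength) and `incr` (trend extrapolation by children) are invented for illustration.

```python
import random

def simulate_change(generations=200, pop=100, bias=0.02, incr=0.5, p0=0.5):
    """Toy sketch: p is the fraction of utterances using the new variant.
    'bias' models regularization (preference for the dominant variant);
    'incr' models incrementation (children extrapolate the recent trend)."""
    p, history = p0, [p0]
    for _ in range(generations):
        # regularization: push p toward whichever variant currently dominates
        drift = bias if p > 0.5 else -bias if p < 0.5 else 0.0
        # incrementation: children overshoot in the direction of recent change
        trend = history[-1] - history[-2] if len(history) > 1 else 0.0
        # sampling noise from a finite population of learners
        sample = sum(random.random() < p for _ in range(pop)) / pop
        p = min(1.0, max(0.0, p + drift + incr * trend + (sample - p)))
        history.append(p)
    return history

random.seed(1)
traj = simulate_change()
# With both forces active, p is typically driven to 0 or 1 (the change runs
# to completion); removing 'bias' or 'incr' slows or stalls the change.
```

The point of the sketch is the instability: chance fluctuations away from 0.5 are amplified by the two social forces rather than averaged away.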

Humans rely on multiple sensory modalities to determine the emotional state of others. In fact, such multisensory perception may be one of the mechanisms explaining the ease and efficiency with which others' emotions are recognized. But how and when exactly do the different modalities interact? One aspect of multisensory perception that has received increasing interest in recent years is the concept of crossmodal prediction. In emotion perception, as in most other settings, visual information precedes the auditory one. Leading visual information can thereby facilitate subsequent auditory processing. While this mechanism has often been described in audiovisual speech perception, it has not been addressed so far in audiovisual emotion perception. Based on the current state of the art in (a) crossmodal prediction and (b) multisensory emotion perception research, we propose that it is essential to consider the former in order to fully understand the latter. Focusing on electroencephalographic (EEG) and magnetoencephalographic (MEG) studies, we provide a brief overview of the current research in both fields. In discussing these findings, we suggest that emotional visual information may allow for a more reliable prediction of auditory information compared to non-emotional visual information. In support of this hypothesis, we present a re-analysis of a previous data set that shows an inverse correlation between the N1 response in the EEG and the duration of visual emotional but not non-emotional information. If the assumption that emotional content allows for more reliable predictions can be corroborated in future studies, crossmodal prediction is a crucial factor in our understanding of multisensory emotion perception.

Predicting the actions of other individuals is crucial for our daily interactions. Recent evidence suggests that the prediction of object-directed arm and full-body actions employs the dorsal premotor cortex (PMd). Thus, the neural substrate involved in action control may also be essential for action prediction. Here, we aimed to address this issue and hypothesised that disrupting the PMd impairs action prediction. Using fMRI-guided coil navigation, rTMS (5 pulses, 10 Hz) was applied over the left PMd and over the vertex (control region) while participants observed everyday actions in video clips that were transiently occluded for one second. The participants detected manipulations in the time course of occluded actions, which required them to internally predict the actions during occlusion. To differentiate between functional roles that the PMd could play in prediction, rTMS was either delivered at occluder onset (TMS-early), affecting the initiation of action prediction, or 300 ms later during occlusion (TMS-late), affecting the maintenance of an ongoing prediction. TMS-early over the left PMd produced more prediction errors than TMS-early over the vertex. TMS-late had no effect on prediction performance, suggesting that the left PMd might be involved particularly during the initiation of internally guided action prediction but may play a subordinate role in maintaining an ongoing prediction. These findings open a new perspective on the role of the left PMd in action prediction which is in line with its functions in action control and in cognitive tasks. In the discussion, the relevance of the left PMd for integrating external action parameters with the observer's motor repertoire is emphasised. Overall, the results are in line with the notion that premotor functions are employed in both action control and action observation.

While performing an action, the timing of when the sensory feedback is given can be used to establish the causal link between the action and its consequence. It has been shown that delaying the visual feedback while carrying an object makes people feel the mass of the object to be greater, suggesting that the feedback timing can also impact the perceived quality of an external object. In this study, we investigated the origin of the feedback timing information that influences the mass perception of the external object. Participants made a straight reaching movement while holding a manipulandum. The movement of the manipulandum was presented as a cursor movement on a monitor. In Experiment 1, various delays were imposed between the actual trajectory and the cursor movement. The participants’ perceived mass of the manipulandum significantly increased as the delay increased to 400 ms, but this gain did not reach significance when the delay was 800 ms. This suggests the existence of a temporal tuning mechanism for incorporating the visual feedback into the perception of mass. In Experiment 2, we examined whether the increased mass perception during the visual delay was due to the prediction error of the visual consequence of an action or to the actual delay of the feedback itself. After the participants adapted to the feedback delay, the perceived mass of the object became lighter than before, indicating that updating the temporal prediction model for the visual consequence diminishes the overestimation of the object’s mass. We propose that the misattribution of the visual delay into mass perception is induced by the sensorimotor prediction error, possibly when the amount of delay (error) is within the range that can reasonably include the consequence of an action.

The human reward system is sensitive to both social (e.g., validation) and non-social rewards (e.g., money) and is likely integral for relationship development and reputation building. However, data is sparse on the question of whether implicit social reward processing meaningfully contributes to explicit social representations such as trust and attachment security in pre-existing relationships. This event-related fMRI experiment examined reward system prediction-error activity in response to a potent social reward—social validation—and this activity’s relation to both attachment security and trust in the context of real romantic relationships. During the experiment, participants’ expectations for their romantic partners’ positive regard of them were confirmed (validated) or violated, in either positive or negative directions. Primary analyses were conducted using predefined regions of interest, the locations of which were taken from previously published research. Results indicate that activity for mid-brain and striatal reward system regions of interest was modulated by social reward expectation violation in ways consistent with prior research on reward prediction-error. Additionally, activity in the striatum during viewing of disconfirmatory information was associated with both increases in post-scan reports of attachment anxiety and decreases in post-scan trust, a finding that follows directly from representational models of attachment and trust.

Focal hand dystonia in musicians is a movement disorder affecting highly trained movements. Rather than being a pure motor disorder related to movement execution only, movement planning, error prediction and sensorimotor integration are also impaired. Internal models, of which two types, forward and inverse, have been described and which are most likely processed in the cerebellum, are known to be involved in these tasks. Recent results indicate that the cerebellum may be involved in the pathophysiology of focal dystonia. Thus the aim of our study was to investigate whether an internal model deficit plays a role in focal dystonia. We focused on the forward model, which predicts the sensory consequences of motor commands and allows the discrimination between external sensory input and input deriving from motor action. We investigated 19 patients aged 19-59 and 19 healthy musicians aged 19-36 as controls. Tactile stimuli were applied to fingers II–V of both hands by the experimenter or the patient. After each stimulus the participant rated the stimulus intensity on a scale between 0 (no sensation) and 1 (maximal intensity). The difference in perceived intensity between self- and externally applied stimuli was then calculated for each finger. For assessing differences between patients and controls we performed a cluster analysis of the affected hand and the corresponding hand of the controls, using fingers II–V as variables in a 4-dimensional hyperspace (chance level = 0.5). This analysis yielded a correct classification of the affected finger in 78.9%–94.7% of cases. There was no difference between patients and healthy controls in the absolute value of the perceived stimulus intensity. Our results suggest an altered forward model function in focal hand dystonia.
This finding has the potential of suggesting a neural correlate within the cerebellum and of helping to integrate findings regarding altered sensorimotor processing and altered prediction in focal dystonia (FD) in a single framework.

During conversation listeners have to perform several tasks simultaneously. They have to comprehend their interlocutor's turn, while also having to prepare their own next turn. Moreover, a careful analysis of the timing of natural conversation reveals that next speakers also time their turns very precisely. This is possible only if listeners can predict accurately when the speaker's turn is going to end. But how are people able to predict when a turn ends? We propose that people know when a turn ends because they know how it ends. We conducted a gating study to examine whether better turn-end predictions coincide with more accurate anticipation of the last words of a turn. We used turns from an earlier button-press experiment where people had to press a button exactly when a turn ended. We show that the proportion of correct guesses in our experiment is higher when a turn's end was estimated more accurately in the button-press experiment. When people were too late in their anticipation in the button-press experiment, they also anticipated more words in our gating study. We conclude that people make predictions in advance about the upcoming content of a turn and use this prediction to estimate the duration of the turn. We suggest an economical model of turn-end anticipation that is based on the anticipation of words and syntactic frames in comprehension.

The present study compared the accuracy of cue-outcome knowledge gained during prediction-based and control-based learning in stable and unstable dynamic environments. Participants either learnt to make cue interventions in order to control an outcome, or learnt to predict the outcome from observing changes to the cue values. Study 1 (N = 60) revealed that in tests of control, after a short period of familiarization, the performance of Predictors was equivalent to that of Controllers. Study 2 (N = 28) showed that Controllers' task knowledge was equivalent to that of Predictors. Though both Controllers and Predictors performed well at test, overall Controllers showed an advantage. The cue-outcome knowledge acquired during learning was sufficiently flexible to enable successful transfer to tests of both control and prediction.

Frank Knight (1921) famously distinguished the epistemic modes of certainty, risk, and uncertainty in order to characterize situations where deterministic, probabilistic or possibilistic foreknowledge is available. Because our probabilistic knowledge is limited, i.e. because many systems, e.g. the global climate, cannot be described and predicted probabilistically in a reliable way, Knight's third category, possibilistic foreknowledge, is not simply absorbed by the probabilistic mode. This raises the question of how to justify possibilistic predictions, including the identification of the worst case. The development of such a modal methodology is particularly vital with respect to predictions of climate change. I show that a methodological dilemma emerges when possibilistic predictions are framed in traditional terms and argue that a more nuanced conceptual framework, distinguishing different types of possibility, should be used in order to convey our uncertain knowledge about the future. The new conceptual scheme, however, questions the applicability of standard rules of rational decision-making, thus generating new challenges.

Gregor Betz explores the following questions: Where are the limits of economics, in particular the limits of economic foreknowledge? Are macroeconomic forecasts credible predictions or mere prophecies and what would this imply for the way economic policy decisions are taken? Is rational economic decision making possible without forecasting at all?

No matter one’s wealth or social position, all are subject to the threats of natural hazards. Be it fire, flood, hurricane, earthquake, tornado, or drought, the reality of hazard risk is universal. In response, governments, non-profits, and the private sector all support research to study hazards. Each has a common end in mind: to increase the resilience of vulnerable communities. While this end goal is shared across hazards, conceptions of how to get there can diverge considerably. The earthquake and hurricane research endeavors in the US provide an illustrative contrast. The earthquake community sets out to increase resilience through a research process that simultaneously promotes both high-quality and usable (preparedness-focused) science. To do so, the logic suggests, research must be collaborative, responsive, and transparent. Hurricane research, by contrast, largely promotes high-quality science (predictions) alone, and presumes that usability will flow from there. This process is not collaborative, responsive, or transparent. Experience suggests, however, that the latter model, hurricane research, does not prepare communities or decision makers to use the high-quality science it has produced when a storm does hit. The predictions are good, but they are not used effectively. Earthquake research, on the other hand, is developed through a collaborative process that equips decision makers to know and use hazards research knowledge as soon as an earthquake hits. The contrast between the two fields suggests that earthquake research is more likely to meet the end goal of resilience than is hurricane research, and thus that communities might be more resilient to hurricanes were the model by which research is funded and conducted to change. The earthquake research experience can provide lessons for this shift.
This paper employs the Public Value Mapping (PVM) framework to explore these two divergent public value logics, their end results, and opportunities for improvement.

Scientists’ responsibility to inform the public about their results may conflict with their responsibility not to cause social disturbance by communicating those results. A study of the well-known Brady-Spence and Iben Browning earthquake predictions illustrates this conflict in the publication of scientifically unwarranted predictions. Furthermore, a public policy that treats the public sensitivity caused by such publications as an opportunity to promote public awareness is ethically problematic from (i) a refined consequentialist point of view, on which not just any means can be justified by any ends, and (ii) a rights view, according to which individuals should never be treated as a mere means to ends. The Parkfield experiment, the so-called paradigm case of cooperation between natural and social scientists and the political authorities in hazard management and risk communication, is also open to similar ethical criticism, for the people in the Parkfield area were not informed that the whole experiment was based on a contested seismological paradigm.

The fact that it takes time for the brain to process information from the changing environment underlies many experimental phenomena of awareness of spatiotemporal events, including a number of astonishing illusions. These phenomena have been explained from both predictive and postdictive theoretical perspectives. Here I describe the most extensively studied phenomena in order to see how well the two perspectives can explain them. Next, the neurobiological perceptual retouch mechanism for producing awareness of stimulation is characterized, and its role in causing the listed illusions is described. This article presents a perspective on how brain mechanisms of conscious perception produce the phenomena supportive of the postdictive view. At the same time, some of the phenomena cannot be explained by the traditional postdictive account, but can be interpreted from the perspective of perceptual retouch theory.

All major research ethics policies assert that the ethical review of clinical trial protocols should include a systematic assessment of risks and benefits. Despite this policy, however, protocols do not typically contain explicit probability statements about the likely risks or benefits involved in the proposed research. In this essay, I articulate a range of ethical and epistemic advantages that explicit forecasting would offer to the health research enterprise. I then consider how particular confidence levels may come into conflict with the principles of ethical research.
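One concrete way to make such explicit probability statements auditable after the fact is a proper scoring rule such as the Brier score. The sketch below is my own illustration with hypothetical forecast values; the essay itself does not prescribe a particular scoring rule:

```python
def brier_score(forecasts, outcomes):
    """Mean squared difference between probability forecasts and
    binary outcomes (1 = event occurred). Lower is better: 0 is a
    perfect forecast, and always saying 0.5 scores 0.25."""
    assert len(forecasts) == len(outcomes)
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical protocol forecasts: the probability, stated in
# advance, that each trial arm yields a clinically meaningful
# benefit, scored against what actually happened.
forecasts = [0.8, 0.3, 0.6]
outcomes = [1, 0, 0]
score = brier_score(forecasts, outcomes)  # roughly 0.163
```

A protocol that recorded such numbers in advance would let review boards check, over many trials, whether stated risk-benefit expectations were calibrated, which is one of the epistemic advantages the essay gestures at.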

When we make a prediction we select, among the conceivable future descriptions of the world, those that appear to us most plausible. We capture this by means of two binary relations, ≺c and ≺p: if t1 and t2 are points in time, we interpret t1 ≺c t2 as saying that t2 is in the conceivable future of t1, while t1 ≺p t2 is interpreted to mean that t2 is in the predicted future of t1. Within a branching-time framework we propose the following notion of “consistency of prediction”: if at t1 some future moment t2 is predicted to occur, then every moment t on the unique path from t1 to t2 should also be predicted at t1, and the prediction of t2 should continue to hold at every such t. A sound and complete axiomatization is provided.
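On a finite tree of moments, the consistency condition admits a direct check. The sketch below is my own construction (the paper gives an axiomatization, not an algorithm): the branching-time frame is represented by parent pointers, and each moment's predicted future by a set:

```python
def path(parent, t1, t2):
    """Unique path from t1 to t2 in a tree given by parent
    pointers; assumes t2 lies in t1's future."""
    chain = [t2]
    while chain[-1] != t1:
        chain.append(parent[chain[-1]])
    return list(reversed(chain))  # t1, ..., t2

def consistent(parent, predicted):
    """Consistency of prediction: whenever t2 is predicted at t1,
    every moment t strictly between t1 and t2 on the unique path
    must also be predicted at t1, and t2 must remain predicted
    at each such intermediate moment t."""
    for t1, preds in predicted.items():
        for t2 in preds:
            for t in path(parent, t1, t2)[1:-1]:  # strictly between
                if t not in predicted[t1] or t2 not in predicted[t]:
                    return False
    return True

# A single chain of moments a -> b -> c.
parent = {"b": "a", "c": "b"}
good = {"a": {"b", "c"}, "b": {"c"}, "c": set()}
bad = {"a": {"c"}, "b": set(), "c": set()}  # c predicted at a, but b is not
print(consistent(parent, good))  # True
print(consistent(parent, bad))   # False
```

The `bad` frame violates the first half of the condition (an intermediate moment on the path to a predicted moment is itself unpredicted); dropping `c` from `predicted["b"]` in the `good` frame would instead violate the second half, the persistence of the prediction along the path.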