This thesis considers two allegations which conservatives often level at no-fault systems – namely, that responsibility is abnegated under no-fault systems, and that no-fault systems under- and over-compensate. I argue that although each of these allegations can be satisfactorily met – the responsibility allegation rests on the mistaken assumption that to take proper responsibility for our actions we must accept liability for those losses for which we are causally responsible, and the compensation allegation rests on the mistaken assumption that tort law's compensatory decisions provide a legitimate norm against which no-fault's decisions can be compared and criticized – doing so leads in a direction which is at odds with accident law reform advocates' typical recommendations. On my account, accident law should not merely be reformed in line with no-fault's principles; rather, it should be abandoned altogether, since the principles that protect no-fault systems from the conservatives' two allegations are incompatible with retaining the category of accident law. Those principles entail that no-fault systems are a form of social welfare rather than accident law systems, and that under these systems serious deprivation – and to a lesser extent causal responsibility – should be conditions of eligibility to claim benefits.

In "Torts, Egalitarianism and Distributive Justice", Tsachi Keren-Paz presents an impressively detailed analysis that bolsters the case in favour of incremental tort law reform. However, although this book's greatest strength is the depth of analysis offered, supporters of radical law reform proposals may interpret the complexity of the solution on offer as conclusive proof that tort law can take adequate account of egalitarian aims only at an unacceptably high cost.

Luck egalitarians think that considerations of responsibility can excuse departures from strict equality. Critics, however, argue that allowing responsibility to play this role has objectionably harsh consequences. Luck egalitarians usually respond either by explaining why that harshness is not excessive, or by identifying allegedly legitimate exclusions from the default responsibility-tracking rule that tone down that harshness. In response, critics respectively deny that this harshness is acceptable, or argue that those exclusions would be ineffective or unjustified. Rather than taking sides, after criticizing both positions I argue that this way of carrying on the debate – i.e. as a debate about whether the harsh demands of responsibility outweigh other considerations, and about whether exclusions to responsibility-tracking would be effective and/or justified – is deeply problematic. On my account, the demands of responsibility do not – indeed, cannot – conflict with the demands of other normative considerations, because responsibility only provides a formal structure within which those other considerations determine how people may be treated; it does not generate its own practical demands.

Egalitarians must address two questions: i. what should there be an equality of? – the question of the currency of equality, or 'equalisandum'; and ii. how should this thing be allocated to achieve an equal distribution? A plausible initial composite answer to these two questions is that resources should be allocated in accordance with choice, because this way the resulting distribution of the equalisandum will 'track responsibility' – responsibility will be tracked in the sense that we alone will be responsible for the resources that are available to us, since our allocation of resources will be a consequence of our own choices. But the effects of actual choices should not be preserved until the prior effects of luck in constitution and circumstance have first been eliminated. For instance, people can choose badly because their choice-making capacity was compromised by a lack of intelligence (i.e. by constitutional bad luck), or because only bad options were open to them (i.e. because of circumstantial bad luck), and under such conditions we are not responsible for our choices. So perhaps a better composite answer to our two questions (from the perspective of tracking responsibility) is that resources should be allocated so as to reflect people's choices, but only once those choices have been corrected for the distorting effects of constitutional and circumstantial luck; on this account, choice preservation and luck elimination are two complementary aims of the egalitarian ideal.
Nevertheless, it is one thing to say that luck's effects should be eliminated, but quite another to figure out just how much resource redistribution would be required to achieve this outcome. It was precisely for this purpose that, in 1981, Ronald Dworkin developed his ingenious hypothetical insurance market argumentative device (HIMAD), which he then used in conjunction with the talent slavery (TS) argument to arrive at an estimate of the amount of redistribution that would be required to reduce the extent of luck's effects. Recently, however, Daniel Markovits has cast doubt on Dworkin's estimates of the amount of redistribution that would be required, by pointing out flaws in his understanding of how the hypothetical insurance market would function. Markovits patched up the HIMAD and used this repaired version, together with his own version of the TS argument, to reach his own conservative estimate of how much redistribution there ought to be in an egalitarian society. Notably, though, on Markovits' account, once the HIMAD is patched up and properly understood, the TS argument also allegedly shows that the two aims of egalitarianism are not necessarily complementary, but can actually compete with one another. According to his own 'equal-agent' egalitarian theory, the aim of choice preservation is more important than the aim of luck elimination, and so he alleges that when the latter aim comes into conflict with the former, the latter must be sacrificed to ensure that people are not subordinated to one another as agents. I believe that Markovits' critique of Dworkin is spot on, but I also think that his own positive thesis – and hence his conclusion about how much redistribution there ought to be in an egalitarian society – is flawed.
Hence, this paper will begin in Section I by explaining how Dworkin uses the HIMAD and his TS argument to estimate the amount of redistribution that there ought to be in an egalitarian society – this section will be largely expository in content. Markovits' critique of Dworkin will then be outlined in Section II, as will his own positive thesis. My critique of Markovits, and my own positive thesis, will then make a fleeting appearance in Section III. Finally, I will conclude by rejecting both Dworkin's and Markovits' estimates of the amount of redistribution that there ought to be in an egalitarian society, and by reaffirming the responsibility-tracking egalitarian claim that choice preservation and luck elimination are complementary and not competing egalitarian aims.

It could be argued that tort law is failing, and arguably an example of this failure is the recent public liability and insurance ('PL&I') crisis. A number of solutions have been proposed, but ultimately the chosen solution should address whatever we take to be the cause of this failure. On one account, the PL&I crisis is the result of an unwarranted expansion of the scope of tort law. Proponents of this position sometimes argue that the duty of care owed by defendants to plaintiffs has expanded beyond reasonable levels, such that parties who were not really responsible for another's misfortune are successfully sued, while those who really were to blame get away without taking any responsibility. However, people should take responsibility for their actions, and the only likely consequence of allowing them to shirk it is that they and others will be less likely to exercise due care in the future, since the deterrents of liability and of no compensation for accidentally self-imposed losses will not be there. Others also argue that this expansion is not warranted because it is inappropriately fueled by 'deep pocket' considerations rather than by considerations of fault. They argue that the presence of liability insurance sways the judiciary to award damages against defendants, since judges know that insurers, and not the defendants personally, will pay in the end anyway. But although it may seem that no real person has to bear these burdens when they are imposed on insurers, in reality all of society bears them collectively when insurers are forced to hike their premiums to cover these increasing damages payments. In any case, it seems unfair to force insurers to cover these costs simply because they can afford to do so. If such an expansion is indeed the cause of the PL&I crisis, then a contraction of the scope of tort liability, and a pious return to the fault principle, might remedy the situation.
However, it could also be argued that inadequate deterrence is the cause of this crisis. On this account the problem would lie not with the tort system's continued unwarranted expansion, but in the fact that defendants really have been too careless. If prospective injurers were appropriately deterred from engaging in unnecessarily risky activities, then fewer accidents would occur in the first place, and this would reduce the need for litigation at its very source. If we take this to be the cause of tort law's failure, then our solution should aim to improve deterrence. Glen Robinson has argued that improved deterrence could be achieved if plaintiffs were allowed to sue defendants for wrongful exposure to ongoing risks of future harm, even in the absence of currently materialized losses. He argues that, at least in toxic injury type cases, the tortious creation of risk [should be seen as] an appropriate basis of liability, with damages being assessed according to the value of the risk, as an alternative to forcing risk victims to abide the outcome of the event and seek damages only if and when harm materializes. In a sense, Robinson wishes to treat newly acquired wrongful risks as de facto wrongful losses, and these are what would be compensated in liability for risk creation ('LFRC') cases. Robinson argues that if the extent of damages were fixed to the extent of risk exposure, all detected unreasonable risk creators would be forced to bear the costs of their activities, rather than only those who could be found responsible for another's injuries 'on the balance of probabilities'. The incidence of accidents should decrease as a result of improved deterrence, which would in turn reduce the 'suing fest' and so resolve the PL&I crisis. So whilst the first solution involves contracting the scope of tort liability, Robinson's solution involves an expansion of its scope.
However, Robinson acknowledges that LFRC seems prima facie incompatible with current tort principles, which at the very least require the presence of plaintiff losses, defendant fault, and causation to be established before defendants are made liable for plaintiffs' compensation. Since losses would by definition be absent in LFRC cases, the first evidentiary requirement would always be frustrated, and in its absence proof of defendant fault and causation would also seem scant. If such an expansion of tort liability were not supported by current tort principles, then it would be no better than proposals to switch accident law across to no-fault, since both solutions would require comprehensive legal reform. However, Robinson argues that the above three evidentiary requirements could be met in LFRC cases to the same extent that they are met in other currently accepted cases, and hence that his solution would be preferable to no-fault solutions because it would require only incremental, not comprehensive, legal reform. Although I believe that actual losses should be present before plaintiffs are allowed to seek compensation, I will not present a positive argument for this conclusion. My aim in this paper is not to debate the relative merits of Robinson's solution as compared to no-fault solutions, nor to determine which account of the cause of the PL&I crisis is closer to the truth, but rather to find out whether Robinson's solution would indeed require less radical legal reform than, for example, proposed no-fault solutions. I will argue that Robinson fails to show that current tort principles would support his proposed solution, and hence that his solution is at best on an even footing with no-fault solutions, since both would require comprehensive legal reform.

This paper centres on the question of whether human rights can be reconciled with patriotism. It lays out the more conventional arguments, which perceive them as incommensurable concepts. A central aspect of this incommensurability relates to the close historical tie between patriotism and the state. One further dimension of this argument is then articulated, namely the contention that patriotism is an explicitly political concept. The implicit antagonism between, on the one hand, the state, politics and patriotism, and, on the other hand, human rights, is illustrated via the work of Carl Schmitt. However, in the last few decades there has been a resurgence of interest in patriotism and an attempt to formulate a more moderate form which tries to reconcile itself with universal ethical themes. Some of these arguments are briefly summarised; the discussion then focuses on Jürgen Habermas's understanding of constitutional patriotism, which is seen to provide an effective response to Schmitt's arguments. There are, however, weaknesses in the constitutional patriotic argument which relate to its limited understanding of both the state and politics. This leads me to formulate my own argument for "unpatriotic patriotism." The discussion then examines and responds to certain potential criticisms of this argument.

Garrath Williams claims that truly responsible people must possess a "capacity … to respond [appropriately] to normative demands" (2008:462). However, there are people whom we would normally praise for their responsibility despite the fact that they do not yet possess such a capacity (e.g. consistently well-behaved young children), and others who have this capacity but who are still patently irresponsible (e.g. some badly-behaved adults). Thus, I argue that to qualify for the accolade "a responsible person" one need not possess such a capacity, but need only be earnestly willing to do the right thing and have a history that testifies to this willingness. Although we may have good reasons to prefer to have such a capacity ourselves, and to associate ourselves with others who have it, at a conceptual level I do not think that such considerations support the claim that having this capacity is a necessary condition of being a responsible person in the virtue sense.

Third-party property insurance (TPPI) protects insured drivers who accidentally damage an expensive car from the threat of financial ruin. Perhaps more importantly, though, TPPI also protects the victims whose losses might otherwise go uncompensated. Ought responsible drivers therefore to take out TPPI? This paper begins by enumerating some reasons why a rational person might believe that they have a moral obligation to take out TPPI. It will be argued that if what is at stake in taking responsibility is the ability to compensate our possible future victims for their losses, then it might initially seem that most people should be thankful for the availability of relatively inexpensive TPPI, because without it they may not have sufficient funds to do the right thing and compensate their victims in the event of an accident. But is the ability to compensate one's victims really what is at stake in taking responsibility? The second part of this paper critically examines the arguments for the above position, and it argues that these arguments do not support the conclusion that injurers should compensate their victims for their losses, and hence that drivers need not take out TPPI in order to be responsible. Further still, even if these arguments did support that conclusion, then (perhaps surprisingly) nobody should be allowed to take out TPPI, because doing so would frustrate justice.

Could neuroimaging evidence help us to assess the degree of a person’s responsibility for a crime which we know that they committed? This essay defends an affirmative answer to this question. A range of standard objections to this high-tech approach to assessing people’s responsibility is considered and then set aside, but I also bring to light and then reject a novel objection—an objection which is only encountered when functional (rather than structural) neuroimaging is used to assess people’s responsibility.

This is a report on the three-day workshop "The Neuroscience of Responsibility" held in the Philosophy Department at Delft University of Technology in the Netherlands on February 11–13, 2010. The workshop had 25 participants from the Netherlands, Germany, Italy, the UK, the USA, Canada and Australia, with expertise in philosophy, neuroscience, psychology, psychiatry and law. Its aim was to identify current trends in neurolaw research related specifically to the topic of responsibility, and to foster international collaborative research on this topic. The workshop agenda was constructed by the participants at the start of each day by surveying the topics of greatest interest and relevance to participants. In what follows, we summarize (1) the questions which participants identified as most important for future research in this field, (2) the most prominent themes that emerged from the discussions, and (3) the two main international collaborative research project plans that came out of this meeting.

The way in which we characterize the structural and functional differences between psychopathic and normal brains – either as biological disorders or as mere biological differences – can influence our judgments about psychopaths' responsibility for criminal misconduct. However, Marga Reimer (Neuroethics 1(2):14, 2008) points out that whether our characterization of these differences should be allowed to affect our judgments in this manner "is a difficult and important question that really needs to be addressed before policies regarding responsibility... can be implemented with any confidence". This paper is an attempt to address Reimer's difficult and important question; I argue that irrespective of which of these two characterizations is chosen, our judgments about psychopaths' responsibility should not be affected, because responsibility hinges not on whether a particular difference is (referred to as) a disorder, but on how that difference affects the mental capacities required for moral agency.

In the field of 'neurolaw', reformists claim that recent scientific discoveries from the mind sciences have serious ramifications for how legal responsibility should be adjudicated, but conservatives deny that this is so. In contrast, I criticise both of these polar opposite positions by arguing that although scientific findings can have often-weighty normative significance, they lack the normative authority with which reformists often imbue them. After explaining why conservatives and reformists are both wrong, I then offer my own moderate suggestions about what views we have reason to endorse. My moderate position reflects the familiar capacitarian idea which underlies much lay, legal, and philosophical thinking about responsibility – namely, that responsibility tracks mental capacity.

Nationalism has had a complex relation with the discipline of political theory during the 20th century. Political theory has often been deeply uneasy with nationalism because of its role in the events leading up to and during the Second World War. Many theorists saw nationalism as an overly narrow and potentially irrationalist doctrine; in essence, it embodied a closed vision of the world. This article focuses on one key contributor to the immediate post-war debate – Karl Popper – who retained deep misgivings about nationalism until the end of his life, and indeed saw the events of the early 1990s (shortly before his death) as a confirmation of this distrust. Popper was one of a number of immediate post-war writers, such as Friedrich Hayek and Ludwig von Mises, who shared this unease with nationalism. They all had a powerful effect on social and political thought in the English-speaking world. Popper in particular articulated a deeply influential perspective that fortuitously encapsulated a cold war mentality in the 1950s. In 2005 Popper's critical views are doubly interesting, since the last decade has seen a renaissance of nationalist interests. The collapse of the Berlin Wall in 1989, and the changing political landscape of international and domestic politics, have once again seen a massive growth of interest in nationalism, particularly from liberal political theorists, and a growing and at times immensely enthusiastic academic literature trying to provide a distinctively benign benediction to nationalism.

New concepts may prove necessary to profit from the avalanche of sequence data on the genome, transcriptome, proteome and interactome, and to relate this information to cell physiology. Here, we focus on the concept of large activity-based structures, or hyperstructures, in which a variety of types of molecules are brought together to perform a function. We review the evidence for the existence of hyperstructures responsible for the initiation of DNA replication, the sequestration of newly replicated origins of replication, cell division, and metabolism. The processes responsible for hyperstructure formation include changes in enzyme affinities due to metabolite induction, lipid-protein affinities, elevated local concentrations of proteins and their binding sites on DNA and RNA, and transertion. Experimental techniques exist that can be used to study hyperstructures, and we review some of the ones less familiar to biologists. Finally, we speculate on how a variety of in silico approaches involving cellular automata and multi-agent systems could be combined to develop new concepts in the form of an integrated cell (I-cell) which would undergo selection for growth and survival in a world of artificial microbiology.

If code is law, then standards bodies are governments. This flawed but powerful metaphor suggests the need to examine more closely those standards bodies that are defining standards for the Internet. In this paper we examine the International Telecommunication Union (ITU), the Institute of Electrical and Electronics Engineers Standards Association (IEEE SA), the Internet Engineering Task Force (IETF), and the World Wide Web Consortium (W3C). We compare the organizations on the basis of participation, transparency, authority, openness, security and interoperability. We conclude that the IETF and the W3C are becoming increasingly similar. We also conclude that the classical distinction between standards and implementations is decreasingly useful, as standards are embodied in code – itself a form of speech or documentation. Recent Internet standards bodies have flourished in part by discarding or modifying the implementation/standards distinction. We illustrate that no single model is superior on all dimensions. The IETF is not scaling effectively, struggling with its explosive growth and the creation of thousands of working groups. The IETF's coordinating body, the Internet Society, addressed growth through a reorganization that removed democratic oversight. The W3C, initially the most closed, is becoming responsive to criticism and now includes open code participants. The IEEE SA and the ITU have institutional controls appropriate for hardware but too constraining for code. Each organization has much to learn from the others.

This exploratory ethics study of a publication and presentation practice herein defined as streaming investigates the attitudes of deans of schools of business and business professors regarding such behavior. Streaming publications is the practice of presenting or publishing an article at one outlet and then taking the same article, with perhaps minor revisions, and presenting or publishing it at another outlet. The results of the survey suggest that the most important ethical behavior regarding streaming practices is disclosure. If authors fully disclose the intellectual history of a paper's developmental process, allegations of possible professional misconduct will be minimized, if not eliminated.

It is conventional to think of modernity as being characterised by the irremediable separation of philosophy and theology, of reason and faith. Failing to reconsider the idea of such a divorce, post-modernity has pushed this postulate to its very limits by attempting to abolish all types of normativity, whether on the grounds of reason or any other basis. Against these prevailing conceptions, we argue that there exist, within philosophy and theology, processes of differentiation as well as original combinations. To illustrate the possibility of mutually enriching exchanges between the philosophical and the theological ethical traditions, we will call upon the historical example of solidarism. This will enable us to show that the two traditions are not so heterogeneous as may at first be thought by those who underestimate the importance of identifying the conditions, both pragmatic and ideological, that govern practical ethical judgements made in situation.

For living beings, information is as fundamental as matter or energy. In this paper we show: a) the inadequacies of quantitative theories of information, and b) how a qualitative analysis leads to a classification of information systems and to a modelling of intercellular communication. From a quantitative point of view, the application in biology of information theories borrowed from communication techniques has proved disappointing. These theories deliberately ignore the significance of messages, and do not give any definition of information. They refer to quantities based upon arbitrarily defined probabilistic events. Probability is subjective: the receiver of the message needs to have meta-knowledge of the events. The quantity of information depends on language, coding, and an arbitrary definition of disorder. The suggested objectivity is fallacious.