This book presents an attempt to develop a theory of knowledge and a philosophy of mind using ideas derived from the mathematical theory of communication developed by Claude Shannon. Information is seen as an objective commodity defined by the dependency relations between distinct events. Knowledge is then analyzed as information caused belief. Perception is the delivery of information in analog form for conceptual utilization by cognitive mechanisms. The final chapters attempt to develop a theory of meaning by viewing meaning as a certain kind of information-carrying role.

Luciano Floridi presents a book that will set the agenda for the philosophy of information. PI is the philosophical field concerned with the critical investigation of the conceptual nature and basic principles of information, including its dynamics, utilisation, and sciences, and the elaboration and application of information-theoretic and computational methodologies to philosophical problems. This book lays down, for the first time, the conceptual foundations for this new area of research. It does so systematically, by pursuing three goals. Its metatheoretical goal is to describe what the philosophy of information is, its problems, approaches, and methods. Its introductory goal is to help the reader to gain a better grasp of the complex and multifarious nature of the various concepts and phenomena related to information. Its analytic goal is to answer several key theoretical questions of great philosophical interest, arising from the investigation of semantic information.

Investigations of the function of consciousness in human information processing have focused mainly on two questions: (1) where does consciousness enter into the information processing sequence? and (2) how does conscious processing differ from preconscious and unconscious processing? Input analysis is thought to be initially "preconscious," "pre-attentive," fast, involuntary, and automatic. This is followed by "conscious," "focal-attentive" analysis, which is relatively slow, voluntary, and flexible. It is thought that simple, familiar stimuli can be identified preconsciously, but that conscious processing is needed to identify complex, novel stimuli. Conscious processing has also been thought to be necessary for choice, learning and memory, and the organization of complex, novel responses, particularly those requiring planning, reflection, or creativity.

The essential difficulty about Computer Ethics' (CE) philosophical status is a methodological problem: standard ethical theories cannot easily be adapted to deal with CE-problems, which appear to strain their conceptual resources, and CE requires a conceptual foundation as an ethical theory. Information Ethics (IE), the philosophical foundational counterpart of CE, can be seen as a particular case of environmental ethics or ethics of the infosphere. What is good for an information entity and the infosphere in general? This is the ethical question asked by IE. The answer is provided by a minimalist theory of deserts: IE argues that there is something more elementary and fundamental than life and pain, namely being, understood as information, and entropy, and that any information entity is to be recognised as the centre of a minimal moral claim, which deserves recognition and should help to regulate the implementation of any information process involving it. IE can provide a valuable perspective from which to approach, with insight and adequate discernment, not only moral problems in CE, but also the whole range of conceptual and moral phenomena that form the ethical discourse.

In this paper we introduce Dynamic Epistemic Logic, which is a logic for reasoning about information change in a multi-agent system. The information structures we use are based on non-well-founded sets, and can be conceived as bisimulation classes of Kripke models. On these structures, we define a notion of information change that is inspired by Update Semantics (Veltman, 1996). We give a sound and complete axiomatization of the resulting logic, and we discuss applications to the puzzle of the dirty children, and to knowledge programs.
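The dirty (muddy) children puzzle mentioned above is a standard benchmark for epistemic update. A minimal sketch in Python follows; it is not from the paper — the function names and the brute-force possible-worlds representation are illustrative, and a public announcement is modeled simply as the elimination of worlds in which it is false:

```python
from itertools import product

# Worlds are tuples of booleans: world[i] is True iff child i is muddy.
# Each child sees everyone's forehead but their own, so the worlds a
# child considers possible differ from the actual world at most at
# their own position.

def knows_own_state(world, i, worlds):
    """Child i knows their own state iff all worlds they consider
    possible agree with `world` at position i."""
    possible = [w for w in worlds
                if w[:i] + w[i + 1:] == world[:i] + world[i + 1:]]
    return all(w[i] == world[i] for w in possible)

def muddy_children(actual, max_rounds=10):
    """Return (round, children who step forward) after repeated public
    announcements that nobody knows their own state yet."""
    n = len(actual)
    # Father's announcement: at least one child is muddy.
    worlds = [w for w in product([False, True], repeat=n) if any(w)]
    for r in range(1, max_rounds + 1):
        speakers = [i for i in range(n) if knows_own_state(actual, i, worlds)]
        if speakers:
            return r, speakers
        # Public announcement "nobody knows" eliminates every world in
        # which some child would have known their own state.
        worlds = [w for w in worlds
                  if not any(knows_own_state(w, i, worlds) for i in range(n))]
    return None, []

# With two muddy children, both announce in round 2.
print(muddy_children((True, True)))
```

Each round of silence is itself informative: it prunes the model, which is the kind of information change the update semantics formalizes.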

The paper investigates the ethics of information transparency (henceforth transparency). It argues that transparency is not an ethical principle in itself but a pro-ethical condition for enabling or impairing other ethical practices or principles. A new definition of transparency is offered in order to take into account the dynamics of information production and the differences between data and information. It is then argued that the proposed definition provides a better understanding of what sort of information should be disclosed and what sort of information should be used in order to implement and make effective the ethical practices and principles to which an organisation is committed. The concepts of “heterogeneous organisation” and “autonomous computational artefact” are further defined in order to clarify the ethical implications of the technology used in implementing information transparency. It is argued that explicit ethical designs, which describe how ethical principles are embedded into the practice of software design, would represent valuable information that could be disclosed by organisations in order to support their ethical standing.

What is the most general common set of attributes that characterises something as intrinsically valuable, and hence as subject to some moral respect, and without which something would rightly be considered intrinsically worthless or even positively unworthy and therefore rightly to be disrespected in itself? This paper develops and supports the thesis that the minimal condition of possibility of an entity's least intrinsic value is to be identified with its ontological status as an information object. All entities, even when interpreted as only clusters of information, still have a minimal moral worth qua information objects and so may deserve to be respected. The paper is organised into four main sections. Section 1 models moral action as an information system using the object-oriented programming methodology (OOP). Section 2 addresses the question of what role the several components constituting the moral system can have in an ethical analysis. If they can play only an instrumental role, then Computer Ethics (CE) is probably bound to remain at most a practical, field-dependent, applied or professional ethics. However, Computer Ethics can give rise to a macroethical approach, namely Information Ethics (IE), if one can show that ethical concern should be extended to include not only human, animal or biological entities, but also information objects. The following two sections show how this minimalist level of analysis can be achieved. Section 3 provides an axiological analysis of information objects. It criticises the Kantian approach to the concept of intrinsic value and shows that it can be improved by using the methodology introduced in the first section. The solution of the Kantian problem prompts the reformulation of the key question concerning the moral worth of an entity: what is the intrinsic value of x qua an object constituted by its inherited attributes? In answering this question, it is argued that entities can share different observable properties depending on the level of abstraction adopted, and that it is still possible to speak of moral value even at the highest level of ontological abstraction represented by the informational analysis. Section 4 develops a minimalist axiology based on the concept of information object. It further supports IE's position by addressing five objections that may undermine its acceptability.

Ethical reflection on drone fighting suggests that this practice creates not only physical distance but also moral distance: far removed from one’s opponent, it becomes easier to kill. This paper discusses this thesis, frames it as a moral-epistemological problem, and explores the role of information technology in bridging and creating distance. Inspired by a broad range of conceptual and empirical resources including ethics of robotics, psychology, phenomenology, and media reports, it is first argued that drone fighting, like other long-range fighting, creates epistemic and moral distance in so far as ‘screenfighting’ implies the disappearance of the vulnerable face and body of the opponent and thus removes moral-psychological barriers to killing. However, the paper also shows that this influence is at least weakened by current surveillance technologies, which make possible a kind of ‘empathic bridging’ by which the fighter’s opponent on the ground is re-humanized, re-faced, and re-embodied. This ‘mutation’ or unintended ‘hacking’ of the practice is a problem for drone pilots and for those who order them to kill, but revealing its moral-epistemic possibilities opens up new avenues for imagining morally better ways of technology-mediated fighting.

Within a given conversation or information exchange, do privacy expectations change based on the technology used? Firms regularly require users, customers, and employees to shift existing relationships onto new information technology, yet little is known about how technology impacts established privacy expectations and norms. Coworkers are asked to use new information technology, users of Gmail are asked to use Google Buzz, patients and doctors are asked to record health records online, etc. Understanding how privacy expectations change, if at all, and the mechanisms by which such a variance is produced will help organizations make such transitions. This paper examines whether and how privacy expectations change based on the technological platform of an information exchange. The results suggest that privacy expectations are significantly distinct when the information exchange is located on a novel technology as compared to a more established technology. Furthermore, this difference is best explained when modeled by a shift in privacy expectations rather than by fully technology-specific privacy norms. These results suggest that privacy expectations online are connected to privacy expectations offline, but with a different base expectation. Surprisingly, out of the five locations tested, respondents consistently assign information on email the greatest privacy protection. In addition, while undergraduate students differ from non-undergraduates when assessing a social networking site, no difference is found when judging an exchange on email. In sum, the findings suggest that novel technology may introduce temporary conceptual muddles rather than permanent privacy vacuums. The results reported here challenge conventional views about how privacy expectations differ online versus offline. Traditionally, management scholarship examines privacy online or on a specific new technology platform in isolation, without reference to the same information exchange offline. In the present study, however, individuals appear to shift their privacy expectations but retain similar factors and their relative importance—the privacy equation by which they form judgments—across technologies. These findings suggest that privacy scholarship should make use of existing privacy norms within contexts when analyzing and studying privacy on a new technological platform.

The paper presents, firstly, a brief review of the long history of information ethics beginning with the Greek concept of parrhesia or freedom of speech as analyzed by Michel Foucault. The recent concept of information ethics is related particularly to problems which arose in the last century with the development of computer technology and the internet. A broader concept of information ethics as dealing with the digital reconstruction of all possible phenomena leads to questions relating to digital ontology. Following Heidegger’s conception of the relation between ontology and metaphysics, the author argues that ontology has to do with Being itself and not just with the Being of beings, which is the matter of metaphysics. The primary aim of an ontological foundation of information ethics is to question the metaphysical ambitions of digital ontology understood as today’s pervading understanding of Being. The author analyzes some challenges of digital technology, particularly with regard to the moral status of digital agents. The author argues that information ethics does not only deal with ethical questions relating to the infosphere. This view is contrasted with arguments presented by Luciano Floridi on the foundation of information ethics as well as on the moral status of digital agents. It is argued that a reductionist view of the human body as digital data overlooks the limits of digital ontology and gives up one basis for ethical orientation. Finally, issues related to the digital divide as well as to intercultural aspects of information ethics are explored, and long- and short-term agendas for appropriate responses are presented.

Floridi’s ontocentric ethics is compared with Spinoza’s ethical and metaphysical system as found in the Ethics. Floridi’s is a naturalistic ethics where he argues that an action is right or wrong primarily according to whether the action decreases the ‘entropy’ of the infosphere or not. An action that decreases the amount of entropy of the infosphere is a good one, and one that increases it is a bad one. For Floridi, ‘entropy’ refers to destruction or loss of diversity of the infosphere, or the total reality consisting of informational objects. The similarity with Spinoza is that both philosophers refer to basic reality as a foundation for normative judgments. Hence they are both ethical naturalists. An interpretation of both Floridi and Spinoza is offered that might begin to solve the basic problems for any naturalistic ethics. The problems are how a value theory that is based on metaphysics could maintain normative force, and how normative force could be justified when there appear to be widely differing metaphysical systems across the many cultural traditions. I argue that in Spinoza’s, and presumably in Floridi’s, system there is no separation between the normative and the natural from the beginning. Normative terms derive their validity from their role in referring to action that leads to a richer and fuller reality. As for the second problem, Spinoza’s God is such that He cannot be fully described by mere finite intellect. What this translates to in the contemporary situation of information ethics is that there are always bound to be many different ways of conceptualizing one and the same reality, and it is people’s needs, goals and desires that often dictate how the conceptualizing is done. However, when different groups of people interact, these systems become calibrated with one another. This is possible because they already belong to the same reality.

This paper examines the question whether, and to what extent, John Locke’s classic theory of property can be applied to the current debate involving intellectual property rights (IPRs) and the information commons. The paper is organized into four main sections. Section 1 includes a brief exposition of Locke’s arguments for the just appropriation of physical objects and tangible property. In Section 2, I consider some challenges involved in extending Locke’s labor theory of property to the debate about IPRs and digital information. In Section 3, it is argued that even if the labor analogy breaks down, we should not necessarily infer that Locke’s theory has no relevance for the contemporary debate involving IPRs and the information commons. Alternatively, I argue that much of what Locke has to say about the kinds of considerations that ought to be accorded to the physical commons when appropriating objects from it – especially his proviso requiring that “enough and as good” be left for others – can also be applied to appropriations involving the information commons. Based on my reading of Locke’s proviso, I further argue that Locke would presume in favor of the information commons when competing interests (involving the rights of individual appropriators and the preservation of the commons) are at stake. In this sense, I believe that Locke offers us an adjudicative principle for evaluating the claims advanced by rival interests in the contemporary debate about IPRs and the information commons. In Section 4, I apply Locke’s proviso in my analysis of two recent copyright laws: the Copyright Term Extension Act (CTEA) and the Digital Millennium Copyright Act (DMCA). I then argue that both laws violate the spirit of Locke’s proviso because they unfairly restrict the access that ordinary individuals have previously had to resources that comprise the information commons. Noting that Locke would not altogether reject copyright protection for IPRs, I conclude that Locke’s classic property theory provides a useful mechanism for adjudicating between claims about how best to ensure that individuals will be able to continue to access information in digitized form, while at the same time allowing that information to enjoy some form of legal protection.

This paper presents the first bibliometric mapping analysis of the field of computer and information ethics (C&IE). It provides a map of the relations between 400 key terms in the field. This term map can be used to get an overview of concepts and topics in the field and to identify relations between information and communication technology concepts on the one hand and ethical concepts on the other. To produce the term map, a data set of over a thousand articles published in leading journals and conference proceedings in the C&IE field was constructed. With the help of various computer algorithms, key terms were identified in the titles and abstracts of the articles, and co-occurrence frequencies of these key terms were calculated. Based on the co-occurrence frequencies, the term map was constructed using a computer program called VOSviewer. The term map provides a visual representation of the C&IE field and, more specifically, of the organization of the field around three main concepts, namely privacy, ethics, and the Internet.
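The co-occurrence counting step described here can be sketched in a few lines of Python. The corpus, term list and counting scheme below are invented toy stand-ins, not the paper's actual data or algorithm; tools such as VOSviewer lay out a term map from a matrix of exactly this kind:

```python
from collections import Counter
from itertools import combinations

# Toy stand-ins for the titles and abstracts of articles; a real
# pipeline would extract key terms automatically from a large corpus.
abstracts = [
    "privacy and ethics on the internet",
    "internet privacy and data ethics",
    "ethics of information technology",
]
terms = ["privacy", "ethics", "internet", "information"]

# For each pair of key terms, count the number of documents in which
# both occur; term maps are laid out from such a co-occurrence matrix.
cooccurrence = Counter()
for text in abstracts:
    present = sorted(t for t in terms if t in text)
    for pair in combinations(present, 2):
        cooccurrence[pair] += 1

print(cooccurrence[("ethics", "privacy")])  # co-occur in 2 abstracts
```

Pairs that co-occur frequently end up close together on the map, which is how clusters around concepts like privacy, ethics and the Internet become visible.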

I describe the emergence of Floridi’s philosophy of information (PI) and information ethics (IE) against the larger backdrop of Information and Computer Ethics (ICE). Among their many strengths, PI and IE offer promising metaphysical and ethical frameworks for a global ICE that holds together globally shared norms with the irreducible differences that define local cultural and ethical traditions. I then review the major defenses and critiques of PI and IE offered by contributors to this special issue, and highlight Floridi’s responses to two central problems in particular – the charge of relativism and the meaning of ‘entropy’ in IE. These responses, conjoined with several elaborations of PI and IE offered here by diverse contributors, including important connections with the naturalistic philosophies of Spinoza and other major Western and Eastern figures, thus issue in an expanded and more refined version of PI and IE – one still facing important questions as well as possibilities for further development.

There is surprisingly little attention in Information Technology ethics to respect for persons, whether as an ethical issue, as a core value of IT ethics, or as a conceptual tool for discussing ethical issues of IT. In this, IT ethics is very different from another field of applied ethics, bioethics, where respect is a core value and conceptual tool. This paper argues that there is value in thinking about ethical issues related to information technologies, especially, though not exclusively, issues concerning identity and identity management, explicitly in terms of respect for persons understood as a core value of IT ethics. After explicating respect for persons, the paper identifies a number of ways in which putting the concept of respect for persons explicitly at the center of both IT practice and IT ethics could be valuable, then examines some of the implicit and problematic assumptions about persons, their identities, and respect that are built into the design, implementation, and use of information technologies and are taken for granted in discussions in IT ethics. The discussion concludes by asking how better conceptions of respect for persons might be employed in IT contexts or brought to bear on specific issues concerning identity in IT contexts.

This paper provides a general philosophical groundwork for the theoretical and applied normative evaluation of information generally, and digital information specifically, in relation to the good life. The overall aim of the paper is to address the question of how Information Ethics, and computer ethics more generally, can be expanded to include more centrally the issue of how and to what extent information relates and contributes to the quality of life or the good life, for individuals and for society. To answer that question, the paper explores and provides, by way of a theoretical groundwork for further research, the concept of wisdom understood as a type of meta-knowledge as well as a type of meta-virtue, which can enable one both to know in principle what a good life is and to successfully apply that knowledge in living such a life in practice. This answer will be based on the main argument presented in this paper that the notion of wisdom, understood as being at once a meta-epistemological, meta-axiological and meta-eudemonic concept, provides the essential conceptual link between information on the one hand and the good life on the other. If, as we are told, this is the Age of Information, both the theoretical examination and analysis of the question of how information relates to the good life and the provision of an adequate answer to that question are essential for developing a deeper understanding of how to evaluate the theoretical and practical implications and ramifications of information for the good life, for individuals and societies generally.

Nussbaum’s version of the capability approach is not only a helpful approach to development problems but can also be employed as a general ethical-anthropological framework in ‘advanced’ societies. This paper explores its normative force for evaluating information technologies, with a particular focus on the issue of human enhancement. It suggests that the capability approach can be a useful way to specify a workable and adequate level of analysis in human enhancement discussions, but argues that any interpretation of what these capabilities mean is itself dependent on (interpretations of) the techno-human practices under discussion. This challenges the capability approach’s means-end dualism concerning the relation between technology on the one hand and humans and capabilities on the other. It is argued that instead of facing a choice between development and enhancement, we had better reflect on how we want to shape human-technological practices, for instance by using the language of capabilities. For this purpose, we have to engage in a cumbersome hermeneutics that interprets dynamic relations between unstable capabilities, technologies, practices, and values. This requires us to modify the capability approach by highlighting and interpreting its interpretative dimension.

In this paper, a critique will be developed and an alternative proposed to Luciano Floridi’s approach to Information Ethics (IE). IE is a macroethical theory that is meant both to serve as a foundation for computer ethics and to guide our overall moral attitude towards the world. The central claims of IE are that everything that exists can be described as an information object, and that all information objects, qua information objects, have intrinsic value and are therefore deserving of moral respect. In my critique of IE, I will argue that Floridi has presented no convincing arguments that everything that exists has some minimal amount of intrinsic value. I will argue, however, that his theory could be salvaged in large part if it were modified from a value-based into a respect-based theory, according to which many (but not all) inanimate things in the world deserve moral respect, not because of intrinsic value, but because of their (potential) extrinsic, instrumental or emotional value for persons.

Luciano Floridi argues that every existing entity is deserving of at least minimal moral respect in virtue of having intrinsic value qua information object. In this essay, I attempt a comprehensive assessment of this important view as well as the arguments Floridi offers in support of it. I conclude both that the arguments are insufficient and that the thesis itself is substantively implausible from the standpoint of ordinary intuitions.

The primary theme of this paper is the normative case against ownership of one's genetic information along with the source of that information (usually human tissue samples). The argument presented here against such “upstream” property rights is based primarily on utilitarian grounds. This issue has new salience thanks to the Human Genome Project and “bio-prospecting” initiatives based on the aggregation of genetic information, such as the one being managed by deCODE Genetics in Iceland. The rationale for ownership is twofold: ownership will protect the basic human rights of privacy and autonomy, and it will enable the data subjects to share in the tangible benefits of the genetic research. Proponents of this viewpoint often cite the principle of genetic exceptionalism, which asserts that genetic information needs a higher level of protection than other kinds of personal information such as financial data. We argue, however, that the recognition of such ownership rights would lead to inefficiency along with the disutility of genetic discoveries. Biomedical research will be hampered if property rights in genes and genetic material are too extensive. We contend that other mechanisms such as informed consent and strict confidentiality rules can accomplish the same result as a property right without the liabilities of an exclusive entitlement.

An important question one can ask of ethical theories is whether and how they aim to raise claims to universality. This refers to the subject area that they intend to describe or govern, and also to the question whether they claim to be binding for all (moral) agents. This paper discusses the question of the universality of Luciano Floridi’s information ethics (IE). This is done by introducing the theory and discussing its conceptual foundations and applications. The emphasis will be placed on the ontological grounding of IE. IE’s claims to universality will be contrasted with those raised by discourse ethics. This comparison of two pertinent ethical theories allows for a critical discussion of areas where IE currently has room for elaboration and development.

The ethics of an information society is discussed from the combined viewpoint of Eastern and Western thought. The breakdown of a coherent self threatens Western ethics and causes nihilism. Francisco Varela, one of the founders of Autopoiesis Theory, tackled this problem and proposed Enactive Cognitive Science by introducing Buddhist middle-way philosophy. Fundamental Informatics gives further insights into the problem by proposing the concept of a hierarchical autopoietic system. Here ethics can be described in relation to a community rather than a coherent self. The philosophical bridge between East and West is expected to solve the ethical aporia in the 21st century.

In this contribution, we identify and clarify some distinctions we believe are useful in establishing the reliability of information on the Internet. We begin by examining some of the salient features of information that go into the determination of reliability. In so doing, we argue that we need to distinguish content and pedigree criteria of reliability, and that we need to separate issues of the reliability of information from the issues of the accessibility and the usability of information. We then turn to an analysis of some common failures to recognize reliability or unreliability.

The information revolution has fostered the rise of new ways of waging war, generally by means of cyberspace-based attacks on the infrastructures upon which modern societies increasingly depend. This new way of war is primarily disruptive, rather than destructive; and its low barriers to entry make it possible for individuals and groups (not just nation-states) to acquire very serious war-making capabilities with ease. The less lethal appearance of information warfare and the possibility of cloaking the attacker's true identity put serious pressure on traditional just war doctrines that call for adherence to the principles of right purpose, duly constituted authority, and last resort. Age-old strictures about noncombatant immunity are also attenuated by the varied means of attack enabled by advanced information technologies. Therefore, the nations and societies leading the information revolution have a primary ethical obligation to constrain the circumstances under which information warfare may be used – principally by means of a pledge of no first use of such means against noncombatants.

Genetic information is becoming increasingly used in modern life, extending beyond medicine to familial history, forensics and more. Following this expansion of use, the effect of genetic information on people’s identity, and ultimately people’s quality of life, is being explored in a host of different disciplines. While a multidisciplinary approach is commendable and necessary, there is the potential for the multidisciplinarity to produce conceptual misconnection. That is, while experts in one field may understand their use of a term like ‘gene’, ‘identity’ or ‘information’, for experts in another field the same term may link to a distinctly different concept. These conceptual misconnections not only increase inefficiency in complex organisational practices, but can also have important ethical, legal and social consequences. This paper comes at the problem of conceptual misconnection by clarifying different uses of the terms ‘gene’, ‘identity’ and ‘information’. I start by looking at three different conceptions of the gene: the Instrumental, the Nominal and the Postgenomic Molecular. Secondly, a taxonomy of four different concepts of identity is presented: Numeric, Character, Group and Essentialised, and their use is clarified. A general concept of Information is introduced, and finally three distinct kinds of information are described. I then introduce Concept Creep as an ethical problem that arises from conceptual misconnections. The primary goal of this paper is to reduce the potential for conceptual misconnection when discussing genetic identity and genetic information. This is complemented by three secondary goals: 1) to clarify what a conceptual misconnection is, 2) to explain why clarity of use is particularly important to discussions of genes, identity and information, and 3) to show how concept creep between different uses of genetic identity and genetic information can have important ethical outcomes.

Some propositions add more information to bodies of propositions than do others. We start with intuitive considerations on qualitative comparisons of information added. Central to these are considerations bearing on conjunctions and on negations. We find that we can discern two distinct, incompatible notions of information added. From the comparative notions we pass to quantitative measurement of information added. In this we borrow heavily from the literature on quantitative representations of qualitative, comparative conditional probability. We look at two ways to obtain a quantitative conception of information added. One, the most direct, mirrors Bernard Koopman’s construction of conditional probability: by making a strong structural assumption, it leads to a measure that is, transparently, some function of a function P which is, formally, an assignment of conditional probability (in fact, a Popper function). P reverses the information added order and mislocates the natural zero of the scale, so some transformation of this scale is needed, but the derivation of P falls out so readily that no particular transformation suggests itself. The Cox–Good–Aczél method assumes the existence of a quantitative measure matching the qualitative relation, and builds on the structural constraints to obtain a measure of information that can be rescaled as, formally, an assignment of conditional probability. A classical result of Cantor’s, subsequently strengthened by Debreu, goes some way towards justifying the assumption of the existence of a quantitative scale. What the two approaches give us is a pointer towards a novel interpretation of probability as a rescaling of a measure of information added.
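As an illustration of the order-reversal the abstract describes (the abstract itself does not fix a transformation, so these two rescalings are merely standard examples, not the paper's proposal): since P(A|B) is high exactly when A adds little information to B, any order-reversing transformation with its zero at P(A|B) = 1 would serve, for instance

```latex
% One candidate: a linear reversal, zero when B already entails A.
\mathrm{inf}(A \mid B) = 1 - P(A \mid B)

% Another: the familiar logarithmic (additive) scale.
\mathrm{inf}(A \mid B) = -\log P(A \mid B)
```

Both assign zero information added when B already entails A, and both are strictly decreasing in P(A|B), which is what "P reverses the information added order" requires of any admissible rescaling.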

It has been argued that moral problems in relation to Information Technology (IT) require new theories of ethics. In recent years, an interesting new theory to address such concerns has been proposed, namely the theory of Information Ethics (IE). Despite the promise of IE, the theory has not enjoyed public discussion. The aim of this paper is to initiate such discussion by critically evaluating the theory of IE.

Computing plays an important role in genetics (and vice versa). Theoretically, computing provides a conceptual model for the function and malfunction of our genetic machinery. Practically, contemporary computers and robots equipped with advanced algorithms make the revelation of the complete human genome imminent – computers are about to reveal our genetic souls for the first time. Ethically, computers help protect privacy by restricting access in sophisticated ways to genetic information. But the inexorable fact that computers will increasingly collect, analyze, and disseminate abundant amounts of genetic information made available through the genetic revolution, not to mention that inexpensive computing devices will make genetic information gathering easier, underscores the need for strong and immediate privacy legislation.

This article presents an overview of significant issues facing contemporary information professionals. As the world of information continues to grow at unprecedented speed and in unprecedented volume, questions must be faced by information professionals. Will we participate in the worldwide mythology of equal access for all, or will we truly work towards this debatable goal? Will we accept the narrowing of choice for our increasingly diverse clientele? Such questions must be considered in a holistic context, and an understanding of the many levels of information inequities is requisite. Beginning with an historical perspective, Buchanan presents Mustapha Masmoudi's seminal review of forms of information inequities. She then describes qualitative forms of inequities, such as information imperialism and cultural bias embedded in such practices as cataloging and classification. Following, a review of quantitative inequities is presented. Such issues as the growing commoditization of information and information services demand attention from the ethical perspective. And, finally, the Internet and the implications surrounding the world-wide dissemination of information are discussed.

Is there such a thing as information justice? In this paper, I argue that the current state of the information economy, particularly as it regards information and computing technology (ICT), is unjust, conferring power disproportionately on the information-wealthy at great expense to the information-poor. As ICT becomes the primary method for accessing and manipulating information, it ought to be treated as a foundational layer of the information economy. I argue that by maximizing the liberties (freedom to use, freedom to distribute, freedom to modify, and so on) associated with certain computer software, an incentives-rich and stable environment can be established in ICT that will foster development of the information economy among the information poor. I suggest that the now-mature Free and Open Source Software paradigm, which has already produced widely-used enterprise-class applications, can be harnessed in support of these ends.

Computer ethicists have for some years been troubled by the issue of how to assign moral responsibility for disastrous events involving erroneous information generated by expert information systems. Recently, Jeroen van den Hoven has argued that agents working with expert information systems satisfy the conditions for what he calls epistemic enslavement. Epistemically enslaved agents do not, he argues, have moral responsibility for accidents for which they bear causal responsibility. In this article, I develop two objections to van den Hoven’s argument for epistemic enslavement of agents working with expert information systems.

The amount of content, both on and offline, to which people in reasonably affluent nations have access has increased to the point that it has raised concerns that we are now suffering from a harmful condition of ‘information overload’. Although the phrase is being used more frequently, the concept is not yet well understood – beyond expressing the rather basic idea of having access to more information than is good for us. This essay attempts to provide a philosophical explication of the concept of information overload and is therefore what philosophers call ‘conceptual analysis’ – a task that, along with normative ethical analysis, is distinctive to Anglo-American style analytic philosophy. I will begin with an analysis of the atomic concepts expressed by the terms ‘information’ and ‘overload’ and then attempt to give a philosophical explanation of the concept of information overload that more precisely identifies exactly what the condition amounts to.

Information received from different sources can be inconsistent. Even when the sources of information can be ordered on the basis of their trustworthiness, it turns out that extracting an acceptable notion of support for information is a non-trivial matter, as is the question of what information a rational agent should accept. Here it is shown how a support ordering on the information can be generated and how it can be used to decide what information to accept and what not to accept. This ordering, it turns out, is closely related to notions such as Epistemic Entrenchment and Grove spheres studied in belief revision.

Residual categories are those which cannot be formally represented within a given classification system. We examine the forms that residuality takes within our information systems today, and explore some silences which form around those inhabiting particular residual categories. We argue that there is significant ethical and political work to be done in exploring residuality.

The purpose of this paper is to look at some existing methods of semantic information quantification and suggest some alternatives. It begins with an outline of Bar-Hillel and Carnap’s theory of semantic information before going on to look at Floridi’s theory of strongly semantic information. The latter then serves to initiate an in-depth investigation into the idea of utilising the notion of truthlikeness to quantify semantic information. Firstly, a couple of approaches to measuring truthlikeness are drawn from the literature and explored, with a focus on their applicability to semantic information quantification. Secondly, a similar but new approach to measuring truthlikeness/information is presented and some supplementary points are made.
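The Bar-Hillel and Carnap measures mentioned above have simple classical definitions: given a logical probability m(s) assigning equal weight to every state description, the content measure is cont(s) = 1 − m(s) and the information measure is inf(s) = −log₂ m(s). A minimal sketch of these two measures (the function names and the model-counting setup are my own illustration, not drawn from the paper):

```python
import math
from itertools import product

def bh_carnap_measures(sentence, atoms):
    """Bar-Hillel/Carnap semantic information measures, assuming equiprobable
    state descriptions over the given atomic sentences.
    `sentence` maps a truth assignment (dict of atom -> bool) to a bool."""
    worlds = [dict(zip(atoms, vals))
              for vals in product([True, False], repeat=len(atoms))]
    m = sum(sentence(w) for w in worlds) / len(worlds)  # logical probability m(s)
    cont = 1 - m                                        # cont(s) = 1 - m(s)
    inf = -math.log2(m) if m > 0 else math.inf          # inf(s) = -log2 m(s)
    return m, cont, inf

# 'p and q' over atoms p, q is true in 1 of 4 worlds.
m, cont, inf = bh_carnap_measures(lambda w: w["p"] and w["q"], ["p", "q"])
```

A contradiction gets m = 0, so cont = 1 and inf is infinite: the Bar-Hillel–Carnap paradox that motivates Floridi's strongly semantic alternative.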

To overcome “digital reductionism,” a new kind of mechanical view of human beings, fundamental informatics provides some critical viewpoints. It regards information as “meaning” generated in living things, which do not exist alone but are parts of an ecological system. On the other hand, V. E. Frankl proposed two dimensions of humans: homo sapiens and homo patiens. The latter is the essential aspect of humans, whose essence is “compassion,” while the former resembles a mechanical machine. As features of living things, an unrestricted ability of interpretation as well as inseparable relationships with others underlie both Frankl’s thought and fundamental informatics. This viewpoint can be applied to the concept of “information literacy.”

The commodification of code demands two preconditions: a belief in the existence of code and a system of ownership for the code. An examination of these preconditions is helpful for resisting the further widening of digital divides. The ontological belief in the relatively independent existence of code is dependent on our understanding of what the “digital” is. Here it is claimed that the digital is not a natural kind, but a concept that is relative to our practices of interpretation. An interpretative system that sees code as something that can or should always be owned implies an increase of social control and threatens vital processes of knowledge creation that are necessary for an open and egalitarian information society. The ontological belief in “digital code” thus provides the backdrop for an ethical view of the information society. Consequently, if we see digital code as an interpretative notion (in the nominalist way), the ethical questions appear in a different light.

As business environments become more complex and reliant on information systems, the decisions made by managers affect a growing number of stakeholders. This paper proposes a framework based on the application of normative theories in business ethics to facilitate the evaluation of IS-related ethical dilemmas and arrive at fair and consistent decisions. The framework is applied in the context of an information privacy dilemma to demonstrate the decision making process. The ethical dilemma is analyzed using each one of the three normative theories—the stockholder theory, stakeholder theory, and social contract theory. The challenges associated with the application of these theories are also discussed.

I present two measures of information for both consistent and inconsistent sets of sentences in a finite language of propositional logic. The measures of information are based on measures of inconsistency developed in Knight (2002). Relative information measures are then provided corresponding to the two information measures.

It is a truism that the design and deployment of information and communication technologies is vital to everyday life, the conduct of work and to social order. But how are individual, organisational and societal choices made? What might it mean to invoke a politics and an ethics of information technology design and use? This editorial paper situates these questions within the trajectory of preoccupations and approaches to the design and deployment of information technology since computerisation began in the 1940s. Focusing upon the dominant concerns over the last three decades, the paper delineates an interest in design and use in relation to socio-technical theories, situated practices and actor-network theory. It is argued that each of these approaches is concerned with a particular form of politics that does not explicitly engage with ethics. In order to introduce ethics into contemporary debates about information technology, and to frame the papers in the special issue, it is argued that Levinas’ ethics is particularly valuable in problematising the relationship between politics and ethics. Levinas provides a critique of modernity’s emphasis on politics and the egocentric self. It is from a Levinasian concern with the Other and the primacy of the ethical that a general rethinking of the relationship between politics, ethics and justice in relation to information and communication technologies can be invoked.

The positive qualities of the Internet--anonymity, openness, and reproducibility--have added a new ethical dimension to the privacy debate. This paper describes a new and significant way in which privacy is violated. A type of personal information, called virtual information, is described, and the effectiveness of techniques to protect this type of information is examined. This examination includes a discussion of technical approaches and professional standards as ways to address this violation of virtual information.

As research into identity in the information society gets into its stride, with contributions from many scholarly disciplines such as technology, social sciences, the humanities and the law, a moment of intellectual stocktaking seems appropriate. This article seeks to provide a roadmap of research currently undertaken in the field of identity and identity management, showing how the area is developing and how disparate contributions relate to each other. Five different perspectives are proposed through which work in the identity field can be seen: tensions, themes, application areas, research focus and disciplinary approaches; taken together, they provide a comprehensive overview of the intellectual territory currently being tilled by academia on this subject. This attempt at a coherent overview is offered in the spirit of debate and discussion, and the authors invite criticism, development and improvement. Another purpose of this paper is to provide an introduction to the range and type of research that the new journal Identity in the Information Society will publish, giving researchers working in the field a clearer idea of the scope of multidisciplinary study that is envisaged.

This paper investigates whether the model of local rhetorical coherence suggested in Knott et al. (2001) can boost the performance of the Centering-based metrics of entity coherence employed by Karamanis et al. (2004) for the task of information ordering. Rhetorical coherence is integrated into the way Centering’s basic data structures are derived from the annotated features of the GNOME corpus. The results indicate that (a) the simplest metric continues to perform better than its competitors even when local rhetorical coherence is taken into account, and (b) this extra coherence constraint decreases its performance.

The potential contributions information and communication technology (ICT) can make to advancing human capabilities are acknowledged by both the capability approach (CA) and ICT communities. However, there is a lack of genuine engagement between the two communities. This paper addresses the question: How can a collaborative dialogue between the CA and ICT communities be advanced? A prerequisite to exploring collaboratively the potential use of particular technologies with specific capabilities is a conceptual framework within which a dialogue can be undertaken to advance the operationalization of capabilities through the use of ICT. A communicative connection constituted of a dialogic space consisting of the CA and ICT communities and a set of normative values and objectives is proposed. The normative values of the communicative connection are derived from the human right to communicate (RTC), which serves as the axiomatic principle of the communicative connection. The shared objectives are to operationalize through the use of ICT both the capability and the right to communicate, which are distinct but present in and reinforce each other. Alternative concepts of communication and freedom of expression to those held by the two communities are presented, along with a comparison of the values embodied in the RTC and found in the CA.

The theories of information ethics articulated by Luciano Floridi and his collaborators have clear implications for law. Information law, including the law of privacy and of intellectual property, is especially likely to benefit from a coherent and comprehensive theory of information ethics. This article illustrates how information ethics might apply to legal doctrine, by examining legal questions related to the ownership and control of the personal data representations, including photographs, game avatars, and consumer profiles, that have become ubiquitous with the proliferation of information and communication technologies. Recent controversy over the control of player performance statistics in “fantasy” sports leagues provides a limiting case for the analysis. Such data representations will in many instances constitute the kind of personal data that information ethics asserts constitutes an information entity. Legal doctrine in some instances proves sympathetic to such an assertion, but remains largely inchoate as to which data might constitute a given information entity in a given instance. Neither is information ethics, in its current state of development, entirely helpful in answering this critical question. While information ethics holds some promise to bring coherence to this area of the law, further work articulating a richer theory of information ethics will be necessary before it can do so.

We compare the elementary theories of Shannon information and Kolmogorov complexity, the extent to which they have a common purpose, and where they are fundamentally different. We discuss and relate the basic notions of both theories: Shannon entropy, Kolmogorov complexity, Shannon mutual information and Kolmogorov (“algorithmic”) mutual information. We explain how universal coding may be viewed as a middle ground between the two theories. We consider Shannon's rate distortion theory, which quantifies useful (in a certain sense) information. We use the communication of information as our guiding motif, and we explain how it relates to sequential question-answer sessions.
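The basic contrast between the two theories can be made concrete: Shannon entropy measures average uncertainty over a distribution of symbols, while Kolmogorov complexity is the length of the shortest program producing an individual string and is uncomputable, so in practice a compressor only gives an upper bound. A minimal sketch under those assumptions (the function names and the zlib proxy are my own illustration, not from the paper):

```python
import math
import zlib
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Empirical Shannon entropy of a string, in bits per symbol."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def kolmogorov_proxy_bits(s: str) -> int:
    """Kolmogorov complexity is uncomputable; the bit-length of a
    zlib-compressed encoding serves as a crude computable upper bound."""
    return 8 * len(zlib.compress(s.encode("utf-8")))

# A highly patterned string: per-symbol entropy is 1 bit (two equiprobable
# symbols), yet the whole string compresses far below its raw 8000 bits,
# reflecting its low algorithmic complexity.
regular = "ab" * 500
h = shannon_entropy(regular)
k = kolmogorov_proxy_bits(regular)
```

The example shows why the two notions come apart on individual objects: entropy sees only symbol frequencies, while the compressed length exploits the string's global regularity.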

In this article we argue that discourse structure constrains the set of possible constituents in a discourse that can provide the relevant context for structuring information in a target sentence, while information structure critically constrains discourse structure ambiguity. For the speaker, the discourse structure provides a set of possible contexts for continuation, while information structure assignment is independent of discourse structure. For the hearer, the information structure of a sentence together with discourse structure instructs dynamic semantics how rhematic information should be used to update the meaning representation of the discourse.

While information structure has traditionally been viewed as a single partition of information within an utterance, there are opposing views that identify multiple such partitions in an utterance. The existence of alternative proposals raises questions about the notion of information structure and also its relation to discourse structure. Exploring various linguistic aspects, this paper supports the traditional view by arguing that there is no information structure partition within a subordinate clause.

Much legislation dealing with the uses of genetic information could be criticised for exceptionalising genetic information over other types of information personal to the individual. This paper contends that genetic exceptionalism clouds the issues, and precludes any real debate about the appropriate uses of genetic information. An alternative to “genetically exceptionalist” legislation is to “legislate for fairness”. This paper explores the “legislating for fairness” approach, and concludes that it demonstrates a fundamental misunderstanding of both how legislation is drafted, and how it is interpreted. The uncomfortable conclusion is this: policy-makers and legislators must tackle head-on the difficult policy questions concerning what should and should not be done with genetic information. Only by confronting this crucial issue will they achieve a workable legislative solution to the problems caused by genetic information.

When seeking to coordinate in a game with imperfect information, it is often relevant for a player to know what other players know. Keeping track of the information acquired in a play of infinite duration may, however, lead to infinite hierarchies of higher-order knowledge. We present a construction that makes explicit which higher-order knowledge is relevant in a game and allows us to describe a class of games that admit coordinated winning strategies with finite memory.