This paper investigates the view that digital hypercomputing is a good reason for rejecting or re-interpreting the Church-Turing thesis. After suggesting that such re-interpretation is historically problematic and often involves an attack on a straw man (the ‘maximality thesis’), it discusses proposals for digital hypercomputing with Zeno-machines, i.e. computing machines that perform an infinite number of computing steps in finite time, thus performing supertasks. It argues that effective computing with Zeno-machines falls into a dilemma: either they are specified such that they do not have output states, or they are specified such that they do have output states, but involve contradiction. Repairs through non-effective methods or special rules for semi-decidable problems are sought, but not found. The paper concludes that hypercomputing supertasks are impossible in the actual world and thus no reason for rejecting the Church-Turing thesis in its traditional interpretation.
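The convergence fact that makes such supertasks conceivable can be sketched in one line (an illustrative textbook construction, assumed here rather than taken from the paper): if step n of a Zeno-machine is allotted 2^{-n} time units, infinitely many steps complete in finite time.

```latex
% Illustrative Zeno-machine schedule (a standard construction, assumed here):
% step n is allotted 2^{-n} time units, so the whole infinite run takes
\sum_{n=1}^{\infty} 2^{-n} \;=\; \tfrac{1}{2} + \tfrac{1}{4} + \tfrac{1}{8} + \cdots \;=\; 1
% time unit. The supertask completes at t = 1, which is exactly where the
% paper's dilemma about the machine's state at t = 1 arises.
```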

In this short paper I will introduce an idea which, I will argue, presents a fundamental additional challenge to the machine consciousness community. The idea takes the questions surrounding phenomenology, qualia and phenomenality one step further into the realm of intersubjectivity but with a twist, and the twist is this: that an agent’s intersubjective experience is deeply felt and necessarily co-affective; it is enkinaesthetic, and only through enkinaesthetic awareness can we establish the affective enfolding which enables first the perturbation, and then the balance and counter-balance, the attunement and co-ordination of whole-body interaction through reciprocal adaptation.

That the successful development of fully autonomous artificial moral agents (AMAs) is imminent is becoming the received view within artificial intelligence research and robotics. The discipline of Machine Ethics, whose mandate is to create such ethical robots, is consequently gaining momentum. Although it is often asked whether a given moral framework can be implemented into machines, it is never asked whether it should be. This paper articulates a pressing challenge for Machine Ethics: to identify an ethical framework that is both implementable into machines and whose tenets permit the creation of such AMAs in the first place. Without consistency between ethics and engineering, the resulting AMAs would not be genuine ethical robots, and hence the discipline of Machine Ethics would be a failure in this regard. Here this challenge is articulated through a critical analysis of the development of Kantian AMAs, Kantian ethics being one of the leading contenders for the ethic that can be implemented into machines. In the end, however, the development of Kantian artificial moral machines is found to be anti-Kantian. The upshot is that machine ethicists need to look elsewhere for an ethic to implement into their machines.

This paper is a summary and evaluation of work presented at the AAAI 2005 Fall Symposium on Machine Ethics, which brought together participants from the fields of Computer Science and Philosophy with the aim of clarifying the nature of this newly emerging field and discussing different approaches one could take towards realizing the ultimate goal of creating an ethical machine.

Gödel's Theorem is often used in arguments against machine intelligence, suggesting humans are not bound by the rules of any formal system. However, Gödelian arguments can be used to support AI, provided we extend our notion of computation to include devices incorporating random number generators. A complete description scheme can be given for integer functions, by which nonalgorithmic functions are shown to be partly random. Not being restricted to algorithms can be accounted for by the availability of an arbitrary random function. Humans, then, might not be rule-bound, but Gödelian arguments also suggest how the relevant sort of nonalgorithmicity may be trivially made available to machines.

The problem of valid induction could be stated as follows: are we justified in accepting a given hypothesis on the basis of observations that frequently confirm it? The present paper argues that this question is relevant for the understanding of Machine Learning, but insufficient. Recent research in inductive reasoning has prompted another, more fundamental question: there is not just one given rule to be tested, there are a large number of possible rules, and many of these are somehow confirmed by the data — how are we to restrict the space of inductive hypotheses and choose effectively some rules that will probably perform well on future examples? We analyze whether and how this problem is approached in standard accounts of induction and show the difficulties that are present. Finally, we suggest that the explanation-based learning approach and related methods of knowledge-intensive induction could be, if not a solution, at least a tool for solving some of these problems.

I consider three aspects in which machine learning and philosophy of science can illuminate each other: methodology, inductive simplicity and theoretical terms. I examine the relations between the two subjects and conclude by claiming these relations to be very close.

Animals, including humans, are usually judged on what they could become, rather than what they are. Many physical and cognitive abilities in the ‘animal kingdom’ are only acquired (to a given degree) when the subject reaches a certain stage of development, which can be accelerated or spoilt depending on the environment, training or education. The term ‘potential ability’ usually refers to how quick and how likely the process of attaining the ability is. In principle, things should not be different for the ‘machine kingdom’. While machines can be characterised by a set of cognitive abilities, and measuring them is already a big challenge, known as ‘universal psychometrics’, a more informative, and yet more challenging, goal would be to also determine the potential cognitive abilities of a machine. In this paper we investigate the notion of potential cognitive ability for machines, focussing especially on universality and intelligence. We consider several machine characterisations (non-interactive and interactive) and give definitions for each case, considering permanent and temporal potentials. From these definitions, we analyse the relation between some potential abilities, bring out the dependency on the environment distribution, and suggest some ideas about how potential abilities can be measured. Finally, we also analyse the potential of environments at different levels and briefly discuss whether machines should be designed to be intelligent or potentially intelligent.

Learning general concepts in imperfect environments is difficult since training instances often include noisy data, inconclusive data, incomplete data, unknown attributes, unknown attribute values and other barriers to effective learning. It is well known that people can learn effectively in imperfect environments, and can manage to process very large amounts of data. Imitating human learning behavior therefore provides a useful model for machine learning in real-world applications. This paper proposes a new, more effective way to represent imperfect training instances and rules, and, based on the new representation, a Human-Like Learning (HULL) algorithm for incrementally learning concepts well in imperfect training environments. Several examples are given to make the algorithm clearer. Finally, experimental results are presented that show the proposed learning algorithm works well in imperfect learning environments.

John Searle distinguished between weak and strong artificial intelligence (AI). This essay discusses a third alternative, mild AI, according to which a machine may be capable of possessing a species of mentality. Using James Fetzer's conception of minds as semiotic systems, the possibility of what might be called “mild AI” receives consideration. Fetzer argues against strong AI by contending that digital machines lack the ground relationship required of semiotic systems. In this essay, the implementational nature of semiotic processes posited by Charles S. Peirce's triadic sign relation is re-examined in terms of the underlying dispositional processes and the ontological levels they would span in an inanimate machine. This suggests that, if non-human mentality can be replicated rather than merely simulated in a digital machine, the direction to pursue appears to be that of mild AI.

The resolution of ambiguities is one of the central problems for Machine Translation. In this paper we propose a knowledge-based approach to disambiguation which uses Description Logics (DL) as representation formalism. We present the process of anaphora resolution implemented in the Machine Translation system FAST and show how the DL system BACK is used to support disambiguation. The disambiguation strategy uses factors representing syntactic, semantic, and conceptual constraints with different weights to choose the most adequate antecedent candidate. We show how these factors can be declaratively represented as defaults in BACK. Disambiguation is then achieved by determining the interpretation that yields a qualitatively minimal number of exceptions to the defaults, and can thus be formalized as exception minimization.

Cybernetics promoted machine-supported investigations of adaptive sensorimotor behaviours observed in biological systems. This methodological approach receives renewed attention in contemporary robotics, cognitive ethology, and the cognitive neurosciences. Its distinctive features concern machine experiments, and their role in testing behavioural models and explanations flowing from them. Cybernetic explanations of behavioural events, regularities, and capacities rely on multiply realizable mechanism schemata, and strike a sensible balance between causal and unifying constraints. The multiple realizability of cybernetic mechanism schemata paves the way to principled comparisons between biological systems and machines. Various methodological issues involved in the transition from mechanism schemata to their machine instantiations are addressed here, by reference to a simple sensorimotor coordination task. These concern the proper treatment of ceteris paribus clauses in experimental settings, the significance of running experiments with correct but incomplete machine instantiations of mechanism schemata, and the advantage of operating with real machines, as opposed to simulated ones, immersed in real environments.

Robert Nozick's experience machine thought experiment is often considered a decisive refutation of hedonism. I argue that the conclusions we draw from Nozick's thought experiment ought to be informed by considerations concerning the operation of our intuitions about value. First, I argue that, in order to show that practical hedonistic reasons are not causing our negative reaction to the experience machine, we must not merely stipulate their irrelevance (since our intuitions are not always responsive to stipulation) but fill in the concrete details that would make them irrelevant. If we do this, we may see our feelings about the experience machine becoming less negative. Second, I argue that, even if our feelings about the experience machine do not perfectly track hedonistic reasons, there are various reasons to doubt the reliability of our anti-hedonistic intuitions. And finally, I argue that, since in the actual world seeing certain things besides pleasure as ends in themselves may best serve hedonistic ends, hedonism may justify our taking these other things to be intrinsically valuable, thus again making the existence of our seemingly anti-hedonistic intuitions far from straightforward evidence for the falsity of hedonism.

It is argued that Nozick's experience machine thought experiment does not pose a particular difficulty for mental state theories of well-being. While the example shows that we value many things beyond our mental states, this simply reflects the fact that we value more than our own well-being. Nor is a mental state theorist forced to make the dubious claim that we maintain these other values simply as a means to desirable mental states. Valuing more than our mental states is compatible with maintaining that the impact of such values upon our well-being lies in their impact upon our mental lives.

On the 27th of October, 1949, the Department of Philosophy at the University of Manchester organized a symposium "Mind and Machine", as Michael Polanyi noted in his Personal Knowledge (1974, p. 261). This event is known, especially among scholars of Alan Turing, but it is scarcely documented. Wolfe Mays (2000) reported on the debate, which he personally had attended, and paraphrased a mimeographed document that is preserved in the Manchester University archive. He forwarded a copy to Andrew Hodges and B. Jack Copeland, who then published it on their respective websites. The interpretation here is based on the copy preserved in the Regenstein Library of the University of Chicago, Special Collections, Polanyi Collection (abbreviated RPC, box 22, folder 19). The same collection holds the mimeographed statement that Polanyi prepared for this symposium: "Can the mind be represented by a machine?" This text has not been studied by Polanyi scholars.

Measurement is said to be the basis of the exact sciences as the process of assigning numbers to matter (things or their attributes), thus making it possible to apply the mathematically formulated laws of nature to the empirical world. Mathematics and empiria are best accorded to each other in laboratory experiments, which function as what Nancy Cartwright calls a nomological machine: an arrangement generating (mathematical) regularities. On the basis of accounts of measurement errors and uncertainties, I will argue for two claims: 1) Both fundamental laws of physics, corresponding to an ideal nomological machine, and phenomenological laws, corresponding to a material nomological machine, lie, being highly idealised relative to the empirical reality; and laboratory measurement data, too, do not describe properties inherent to the world independently of human understanding of it. 2) Therefore the naive, representational view of measurement and experimentation should be replaced with a more pragmatic or practice-based view.

Most philosophers appear to have ignored the distinction between the broad concept of Virtual Machine Functionalism (VMF) described in Sloman & Chrisley (2003) and the better known version of functionalism referred to there as Atomic State Functionalism (ASF), which is often given as an explanation of what Functionalism is, e.g. in Block (1995).

One of the main differences is that ASF encourages talk of supervenience of states and properties, whereas VMF requires supervenience of machines that are arbitrarily complex networks of causally interacting (virtual, but real) processes, possibly operating on different time-scales. Examples include the many different processes usually running concurrently on a modern computer, performing various tasks concerned with handling interfaces to physical devices, managing the file system, dealing with security, providing tools, entertainments, and games, and possibly processing research data. Another example of VMF would be the kind of functionalism involved in a large collection of possibly changing socio-economic structures and processes interacting in a complex community; yet another is illustrated by the kind of virtual machinery involved in the many levels of visual processing of information about spatial structures, processes, and relationships (including percepts of moving shadows, reflections, highlights, optical-flow patterns and changing affordances) as you walk through a crowded car-park on a sunny day: generating a whole zoo of interacting qualia. (Forget solitary red patches, or experiences thereof.)

Perhaps VMF should be re-labelled "Virtual MachinERY Functionalism", because the word 'machinery' more readily suggests something complex with interacting parts.

VMF is concerned with virtual machines that are made up of interacting, concurrently active (but not necessarily synchronised) chunks of virtual machinery which not only interact with one another and with their physical substrates (which may be partly shared, and also frequently modified by garbage collection, metabolism, or whatever) but can also concurrently interact with and refer to various things in the immediate and remote environment (via sensory/motor channels, and possible future technologies also). I.e. virtual machinery can include mechanisms that create and manipulate semantic content, not only syntactic structures or bit patterns as digital virtual machines do.

Please note: click on the title above or the link below to read the paper. I prefer to keep all my papers freely accessible on my web site so that I can correct mistakes and add improvements. http://www.cs.bham.ac.uk/research/projects/cogaff/misc/vm-functionalism.html

This is now part of the Meta-Morphogenesis project: http://www.cs.bham.ac.uk/research/projects/cogaff/misc/meta-morphogenesis.html.

In Physics VI.9 Aristotle addresses Zeno's four paradoxes of motion and amongst them the arrow paradox. In his brief remarks on the paradox, Aristotle suggests what he takes to be a solution to the paradox. In two famous papers, both called 'A note on Zeno's arrow', Gregory Vlastos and Jonathan Lear each suggest an interpretation of Aristotle's proposed solution to the arrow paradox. In this paper, I argue that these two interpretations are unsatisfactory, and suggest an alternative interpretation. In particular, I claim that what seems on the face of it to be Aristotle's solution to the paradox raises two puzzles to which my interpretation, as opposed to Lear's and Vlastos's, provides an adequate response.

We describe an emerging field, that of nonclassical computability and nonclassical computing machinery. According to the nonclassicist, the set of well-defined computations is not exhausted by the computations that can be carried out by a Turing machine. We provide an overview of the field and a philosophical defence of its foundations.

Brain-machine interfaces are a growing field of research and application. The increasing possibilities to connect the human brain to electronic devices and computer software can be put to use in medicine, the military, and entertainment. Concrete technologies include cochlear implants, Deep Brain Stimulation, neurofeedback and neuroprosthesis. The expectations for the near and further future are high, though it is difficult to separate hope from hype. The focus in this paper is on the effects that these new technologies may have on our ‘symbolic order’—on the ways in which popular categories and concepts may change or be reinterpreted. First, the blurring distinction between man and machine and the idea of the cyborg are discussed. It is argued that the morally relevant difference is that between persons and non-persons, which does not necessarily coincide with the distinction between man and machine. The concept of the person remains useful. It may, however, become more difficult to assess the limits of the human body. Next, the distinction between body and mind is discussed. The mind is increasingly seen as a function of the brain, and thus understood in bodily and mechanical terms. This raises questions concerning concepts of free will and moral responsibility that may have far-reaching consequences in the field of law, where some have argued for a revision of our criminal justice system, from retributivist to consequentialist. Even without such an (unlikely and unwarranted) revision occurring, brain-machine interactions raise many interesting questions regarding distribution and attribution of responsibility.

This article criticises one of Stuart Rachels' and Larry Temkin's arguments against the transitivity of 'better than'. This argument invokes our intuitions about our preferences over different bundles of pleasurable or painful experiences of varying intensity and duration, which, it is argued, will typically be intransitive. This article defends the transitivity of 'better than' by showing that Rachels and Temkin are mistaken to suppose that preferences satisfying their assumptions must be intransitive. It makes clear where the argument goes wrong by showing that it is a version of Zeno's paradox of Achilles and the Tortoise.

Can we test philosophical thought experiments, such as whether people would enter an experience machine or would leave one once they are inside? Dan Weijers argues that since 'rational' subjects (e.g. students taking surveys in college classes) are believable, we can do so. By contrast, I argue that because such subjects will probably have the wrong affect (i.e. emotional states) when they are tested, such tests are almost worthless. Moreover, understood as a general policy, such pretend testing would ruin the results of most psychological tests, such as those of helping behavior, attitudes to authority, moral transgressions, etc. However, I also argue that certain philosophical thought experiments do not require us to have as much (or any) affect to understand them, or to elicit intuitions, and so can be tested. Generally, experimental philosophy must adhere to this limit, on pain of offering vacuous results.

Various arguments have been put forward to show that Zeno-like paradoxes are still with us. A particularly interesting one involves a cube composed of colored slabs that geometrically decrease in thickness. We first point out that this argument has already been nullified by Paul Benacerraf. Then we show that nevertheless a further problem remains, one that withstands Benacerraf's critique. We explain that the new problem is isomorphic to two other Zeno-like predicaments: a problem described by Alper and Bridger in 1998, and a modified version of the problem that Benardete introduced in 1964. Finally, we present a solution to the three isomorphic problems.

Instead of using low-level neurophysiology mimicking and exploratory programming methods commonly used in the machine consciousness field, the hierarchical Operational Architectonics (OA) framework of brain and mind functioning proposes an alternative conceptual-theoretical framework as a new direction in the area of model-driven machine (robot) consciousness engineering. The unified brain-mind theoretical OA model explicitly captures (though in an informal way) the basic essence of brain functional architecture, which indeed constitutes a theory of consciousness. The OA describes the neurophysiological basis of the phenomenal level of brain organization. In this context the problem of producing man-made “machine” consciousness and “artificial” thought is a matter of duplicating all levels of the operational architectonics hierarchy (with its inherent rules and mechanisms) found in the brain electromagnetic field. We hope that the conceptual-theoretical framework described in this paper will stimulate the interest of mathematicians and/or computer scientists to abstract and formalize principles of hierarchy of brain operations which are the building blocks for phenomenal consciousness and thought.

Brain machine interface (BMI) technology makes direct communication between the brain and a machine possible by means of electrodes. This paper reviews the existing and emerging technologies in this field and offers a systematic inquiry into the relevant ethical problems that are likely to emerge in the following decades.

Philosophical discussion of Alan Turing’s writings on intelligence has mostly revolved around a single point made in a paper published in the journal Mind in 1950. This is unfortunate, for Turing’s reflections on machine (artificial) intelligence, human intelligence, and the relation between them were more extensive and sophisticated. They are seen to be extremely well-considered and sound in retrospect. Recently, IBM developed a question-answering computer (Watson) that could compete against humans on the game show Jeopardy! There are hopes it can be adapted to other contexts besides that game show, in the role of a collaborator of, rather than a competitor to, humans. Another, different research project (an artificial intelligence program put into operation in 2010) is the machine learning program NELL (Never Ending Language Learning), which continuously ‘learns’ by ‘reading’ massive amounts of material on millions of web pages. Both of these recent endeavors in artificial intelligence rely to some extent on the integration of human guidance and feedback at various points in the machine’s learning process. In this paper, I examine Turing’s remarks on the development of intelligence used in various kinds of search, in light of the experience gained to date on these projects.

Aaron Sloman remarks that a lot of present disputes on consciousness are usually based, on the one hand, on re-inventing “ideas that have been previously discussed at length by others” and, on the other hand, on debating “unresolvable” issues, such as that about which animals have phenomenal consciousness. For what it’s worth, I will give a couple of examples, which are related to certain topics that Sloman deals with in his paper, and that might be useful for introducing some comments in the remainder of this brief note.

Earlier, we have studied computations possible by physical systems and by algorithms combined with physical systems. In particular, we have analysed the idea of using an experiment as an oracle to an abstract computational device, such as the Turing machine. The theory of composite machines of this kind can be used to understand (a) a Turing machine receiving extra computational power from a physical process, or (b) an experimenter modelled as a Turing machine performing a test of a known physical theory T. Our earlier work was based upon experiments in Newtonian mechanics. Here we extend the scope of the theory of experimental oracles beyond Newtonian mechanics to electrical theory. First, we specify an experiment that measures resistance using a Wheatstone bridge and start to classify the computational power of this experimental oracle using non-uniform complexity classes. Secondly, we show that modelling an experimenter and experimental procedure algorithmically imposes a limit on our ability to measure resistance by the Wheatstone bridge. The connection between the algorithm and the physical test is mediated by a protocol controlling each query, especially the physical time taken by the experimenter. In our studies we find that physical experiments have an exponential-time protocol; we formulate this as a general conjecture. Our theory proposes that measurability in Physics is subject to laws which are collateral effects of the limits of computability and computational complexity.

A is for Alice and astronomers arguing about acceleration -- B is for Bernard's body-exchange machine -- C is for the Catholic cannibal -- D is for Maxwell's demon -- E is for evolution (and an embarrassing problem with it) -- F is for the forms lost forever to the prisoners of the cave -- G is for Galileo's gravitational balls -- H is for Hume's shades -- I is for the identity of indiscernibles -- J is for Henri Poincaré and alternative geometries -- K is for the Kritik and Kant's kind of thought experiments -- L is for Lucretius' spear -- M is for Mach's motionless chain -- N is for Newton's bucket -- O is for Olbers' paradox -- P is for Parfit's person -- Q is for the questions raised by thought experiments quotidiennes -- R is for the rule-ruled room -- S is for Salvatius' ship, sailing along its own space-time line -- T is for the time-travelling twins -- U is for the universe, and Einstein's attempts to understand it -- V is for the vexed case of the violinist -- W is for Wittgenstein's beetle -- X is for Xenophanes and thinking by examples -- Y is for counterfactuals and a backwards approach to history -- Z is for Zeno and the mysteries of infinity.

Human and machine discovery are gradual problem-solving processes of searching large problem spaces for incompletely defined goal objects. Research on problem solving has usually focused on search of an instance space (empirical exploration) and a hypothesis space (generation of theories). In scientific discovery, search must often extend to other spaces as well: spaces of possible problems, of new or improved scientific instruments, of new problem representations, of new concepts, and others. This paper focuses especially on the processes for finding new problem representations and new concepts, which are relatively new domains for research on discovery. Scientific discovery has usually been studied as an activity of individual investigators, but these individuals are positioned in a larger social structure of science, being linked by the blackboard of open publication (as well as by direct collaboration). Even while an investigator is working alone, the process is strongly influenced by knowledge and skills stored in memory as a result of previous social interaction. In this sense, all research on discovery, including the investigations on individual processes discussed in this paper, is social psychology, or even sociology.

John Searle has argued that one can imagine embodying a machine running any computer program without understanding the symbols, and hence that purely computational processes do not yield understanding. The disagreement this argument has generated stems, I hold, from ambiguity in talk of 'understanding'. The concept is analysed as a relation between subjects and symbols having two components: a formal and an intentional. The central question, then, becomes whether a machine could possess the intentional component with or without the formal component. I argue that the intentional state of a symbol's being meaningful to a subject is a functionally definable relation between the symbol and certain past and present states of the subject, and that a machine could bear this relation to a symbol. I sketch a machine which could be said to possess, in primitive form, the intentional component of understanding. Even if the machine, in lacking consciousness, lacks full understanding, it contributes to a theory of understanding and constitutes a counterexample to the Chinese Room argument.

Are trust relationships involving humans and artificial agents (AAs) possible? This controversial question has become a hotly debated topic in the emerging field of machine ethics. Employing a model of trust advanced by Buechner and Tavani (2011, pp. 39–51), I argue that the “short answer” to this question is yes. However, I also argue that a more complete and nuanced answer will require us to articulate the various levels of trust that are also possible in environments comprising both human agents (HAs) and AAs. In defending this view, I show how James Moor’s model for distinguishing four levels of ethical agents in the context of machine ethics (2006, pp. 18–21) can help us to develop a framework that differentiates four levels of trust. Via a series of hypothetical scenarios, I illustrate each level of trust involved in HA–AA relationships. Finally, I argue that these levels of trust reflect three key factors or variables: the level of autonomy of the individual AAs involved, the degree of risk/vulnerability on the part of the HAs who place their trust in the AAs, and the kind of interactions that occur between the HAs and AAs in the trust environments.

The five participants in this dialogue critically discuss Zeno of Elea's paradox of Achilles and the tortoise. They consider a number of solutions to and restatements of the paradox, together with their philosophical implications. The issues investigated include the appearance-reality distinction, Aristotle's distinction between actual and potential infinity, the concept of a continuum, Cantor's continuum hypothesis and theory of transfinite ordinals, and, as a solution to Zeno's puzzle, the distinction between infinite and indeterminate or inexhaustible divisibility.
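One standard mathematical response discussed in connection with the paradox can be stated compactly: the infinitely many stages of the pursuit sum to a finite quantity. For illustration, if each stage covers half the remaining distance, the total distance is a convergent geometric series:

```latex
\sum_{n=1}^{\infty} \frac{1}{2^{n}} \;=\; \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots \;=\; 1
```

Whether this convergence dissolves the paradox or merely redescribes it is exactly what distinctions such as actual versus potential infinity are invoked to adjudicate.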

Analogies to machines are commonplace in the life sciences, especially in cellular and molecular biology — they shape conceptions of phenomena and expectations about how they are to be explained. This paper offers a framework for thinking about such analogies. The guiding idea is that machine-like systems are especially amenable to decompositional explanation, i.e., to analyses that tease apart underlying components and attend to their structural features and interrelations. I argue that for decomposition to succeed a system must exhibit causal orderliness, which I explicate in terms of differentiation among parts and the significance of local relations. I also discuss what makes a model depict its target as machine-like, suggesting that a key issue is the degree of detail with respect to the target’s parts and their interrelations.

This paper seeks to understand machine cognition, a notion that has remained obscure. It is a familiar argument in cognitive science that human cognition is still only faintly understood; this paper argues that machine cognition is understood even less well, despite how much is known about computer architecture and computational operations. Even if there have been putative claims about the transparency of the notion of machine computations, these claims do not hold up in unraveling machine cognition, let alone machine consciousness (if there is any such thing). The nature and form of machine cognition is further confused by attempts to explain human cognition in terms of computation and to model or simulate (aspects of) human cognitive processing in machines. Given that these problems in characterizing machine cognition persist, a view of machine cognition that aims to avoid them is outlined. The argument advanced is that something becomes a computation in a machine only when a human interprets it, which is a kind of semiotic causation. It follows that a computing machine is not engaged in a computation unless a human interprets what it is doing; instead, it is engaged in machine cognition, which is defined as a member or subset of the set of all possible mappings of inputs to outputs. The human interpretation, a semiotic process, gives meaning to what a machine does, and only then does what it does become a computation.

Herein we make a plea to machine ethicists for the inclusion of constraints on their theories consistent with empirical data on human moral cognition. As philosophers, we clearly lack widely accepted solutions to issues regarding the existence of free will, the nature of persons, and firm conditions on moral agency/patienthood, all of which are indispensable concepts to be deployed by any machine able to make moral judgments. No agreement seems forthcoming on these matters, and we don’t hold out hope for machines that can both always do the right thing (on some general ethic) and produce explanations for their behavior that would be understandable to a human confederate. Our tentative solution involves understanding the folk concepts associated with our moral intuitions regarding these matters, and how they might be dependent upon the nature of human cognitive architecture. It is in this spirit that we begin to explore the complexities inherent in human moral judgment via computational theories of the human cognitive architecture, rather than under the extreme constraints imposed by the rational-actor models assumed throughout much of the literature on philosophical ethics. After discussing the various advantages and challenges of taking this particular perspective on the development of artificial moral agents, we computationally explore a case study of human intuitions about the self and causal responsibility. We hypothesize that a significant portion of the variance in reported intuitions for this case might be explained by appeal to an interplay between the human ability to mindread and the way that knowledge is organized conceptually in the cognitive system. In the present paper, we build on a pre-existing computational model of mindreading (Bello et al. 2007) by adding constraints related to psychological distance (Trope and Liberman 2010), a well-established psychological theory of conceptual organization.
Our initial results suggest that studies of folk concepts involved in moral intuitions lead us to an enriched understanding of cognitive architecture and a more systematic method for interpreting the data generated by such studies.

We present a novel procedure to engage the public in ethical deliberations on the potential impacts of brain machine interface technology. We call this procedure a convergence seminar, a form of scenario-based group discussion that is founded on the idea of hypothetical retrospection. The theoretical background of this procedure and the results of five seminars are presented.

The Inventive Machine (IM) project is the subject of discussion. The project aims to develop a family of AI systems for intelligent support of all stages of engineering design. Peculiarities of the IM project include: a deep and comprehensive knowledge base, the theory of inventive problem solving (TIPS); solving complex problems at the level of inventions; application in any area of engineering; and structural prediction of engineering system development. The systems of the second generation are described in detail.

In this paper we discuss the application of a new machine learning approach – Argument Based Machine Learning (ABML) – to the legal domain. An experiment using a dataset which has also been used in previous experiments with other learning techniques is described, and a comparison with those previous experiments is made. We also tested this method for its robustness to noise in the learning data. Argument Based Machine Learning is particularly suited to the legal domain because it makes use of the justifications of decisions, which are available there. Importantly, where a large number of decided cases are available, it provides a way of identifying which need to be considered: using this technique, only decisions which will have an influence on the rules being learned are examined.

A novel conceptual framework for theoretical psychology is presented and illustrated for the example of bistable perception. A basic formal feature of this framework is the non-commutativity of operations acting on mental states. A corresponding model for the bistable perception of ambiguous stimuli, the Necker–Zeno model, is sketched and some empirical evidence for it so far is described. It is discussed how a temporal non-locality of mental states, predicted by the model, can be understood and tested.
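The algebraic point at the heart of the framework, that operations acting on mental states need not commute, can be illustrated with a toy numerical sketch. The matrices below are arbitrary stand-ins chosen for illustration, not the actual operators of the Necker–Zeno model:

```python
import numpy as np

# Two toy "operations" on a two-dimensional state space.
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])   # swaps the two components of a state
B = np.array([[1.0, 0.0],
              [0.0, -1.0]])  # flips the sign of the second component

state = np.array([1.0, 0.0])

# Applying A then B gives a different result than applying B then A:
print(B @ (A @ state))            # [ 0. -1.]
print(A @ (B @ state))            # [0. 1.]
print(np.allclose(A @ B, B @ A))  # False
```

In quantum-inspired formalisms, this order-dependence of sequential operations is precisely the feature that distinguishes them from classical observations, and it is what gives the model its characteristic predictions, such as temporal non-locality.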

This paper discusses how to refine a given initial legal ontology using an existing MRD (Machine-Readable Dictionary). There are two hard issues in the refinement process. One is to find the MRD concepts most closely related to given legal concepts. The other is to correct bugs in the given legal ontology using the concepts extracted from the MRD. To resolve these issues, we present a method that finds the best MRD correspondences to given legal concepts using two match algorithms. Moreover, another method, called static analysis, is given to refine the legal ontology, based on a comparison between the initial legal ontology and the best MRD correspondences to the given legal concepts. We have implemented a software environment to help a user refine a given legal ontology based on these methods. The empirical results have shown that the environment works well in the field of Contracts for the International Sale of Goods.

Turing wrote that the “guiding principle” of his investigation into the possibility of intelligent machinery was “The analogy [of machinery that might be made to show intelligent behavior] with the human brain.” [10] In his discussion of the investigations that Turing said were guided by this analogy, however, he employs a more far-reaching analogy: he eventually expands the analogy from the human brain out to “the human community as a whole.” Along the way, he takes note of an obvious fact in the bigger scheme of things regarding human intelligence: grownups were once children; this leads him to imagine what a machine analogue of childhood might be. In this paper, I’ll discuss Turing’s child-machine, what he said about different ways of educating it, and what impact the “bringing up” of a child-machine has on its ability to behave in ways that might be taken for intelligent. I’ll also discuss how some of the various games he suggested humans might play with machines are related to this approach.

We demonstrate a hybrid machine learning method to classify schizophrenia patients and healthy controls, using functional magnetic resonance imaging (fMRI) and single nucleotide polymorphism (SNP) data. The method consists of four stages: (1) SNPs with the most discriminating information between the healthy controls and schizophrenia patients are selected to construct a support vector machine ensemble (SNP-SVME). (2) Voxels in the fMRI map contributing to classification are selected to build another SVME (Voxel-SVME). (3) Components of fMRI activation obtained with independent component analysis (ICA) are used to construct a single SVM classifier (ICA-SVMC). (4) The above three models are combined into a single module using a majority voting approach to make a final decision (Combined SNP-fMRI). The method was evaluated with a fully validated leave-one-out procedure using 40 subjects (20 patients and 20 controls). The classification accuracy was 0.74 for SNP-SVME, 0.82 for Voxel-SVME, 0.83 for ICA-SVMC, and 0.87 for Combined SNP-fMRI. Experimental results show that better classification accuracy was achieved by combining genetic and fMRI data than by using either alone, indicating that genetic and brain-function data represent different but partially complementary aspects of schizophrenia etiopathology. This study suggests an effective way to reassess the biological classification of individuals with schizophrenia, which is also potentially useful for identifying diagnostically important markers for the disorder.
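Stage (4), the majority vote over the three classifiers, can be sketched as follows. This is a minimal illustration with hypothetical per-subject predictions, not the authors' code; labels are 0 = control, 1 = patient:

```python
import numpy as np

def majority_vote(*prediction_sets):
    # Stack each model's binary predictions and take, for every
    # subject, the label chosen by more than half of the models.
    votes = np.stack(prediction_sets)
    return (votes.sum(axis=0) > votes.shape[0] / 2).astype(int)

# Hypothetical predictions from the three models for five subjects.
snp_svme   = np.array([1, 0, 1, 1, 0])
voxel_svme = np.array([1, 1, 0, 1, 0])
ica_svmc   = np.array([0, 1, 1, 1, 0])

combined = majority_vote(snp_svme, voxel_svme, ica_svmc)
print(combined)  # [1 1 1 1 0]
```

With an odd number of voters there are no ties, which is presumably one reason the study combines exactly three component models.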

The article reports the results from the development of four data-driven discovery systems operating in linguistics. The first mimics the induction methods of John Stuart Mill, the second performs componential analysis of kinship vocabularies, the third is a general multi-class discrimination program, and the fourth finds logical patterns in data. These systems are briefly described and some arguments are offered in favour of machine linguistic discovery. The arguments refer to the strength of machines in computationally complex tasks, the guaranteed consistency of machine results, the portability of machine methods to new tasks and domains, and the potential machines provide for our gaining new insights.
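The first system's approach can be illustrated with a minimal sketch of one of Mill's canons, the method of agreement: a circumstance present in every case where the phenomenon occurs is a candidate cause. The data and function names here are illustrative assumptions, not taken from the article:

```python
def method_of_agreement(cases):
    """Mill's method of agreement: return the circumstances common to
    all cases in which the phenomenon occurs (candidate causes)."""
    positives = [set(circumstances) for circumstances, occurs in cases if occurs]
    return set.intersection(*positives) if positives else set()

# Toy data: each case pairs the circumstances present with whether
# the phenomenon was observed.
cases = [
    ({"A", "B", "C"}, True),
    ({"A", "D", "E"}, True),
    ({"B", "D"},      False),
]
print(method_of_agreement(cases))  # {'A'}
```

The guaranteed-consistency argument from the abstract is visible even in this toy: given the same cases, the program always returns the same candidate set, with no analyst-to-analyst variation.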

Examples in the history of Automated Theorem Proving are given, in order to show that even a seemingly ‘mechanical’ activity, such as deductive inference drawing, involves special cultural features and tacit knowledge. Mechanisation of reasoning is thus regarded as a complex undertaking in ‘cultural pruning’ of human-oriented reasoning. Sociological counterparts of this passage from human- to machine-oriented reasoning are discussed, focusing on problems of man-machine interaction in the area of computer-assisted proof processing.

The Geneva–Brussels approach to quantum mechanics (QM) and the semantic realism (SR) nonstandard interpretation of QM exhibit some common features and some deep conceptual differences. We discuss in this paper two elementary models provided in the two approaches as intuitive supports to general reasoning and as proofs of consistency of general assumptions, and show that Aerts’ quantum machine can be embodied into a macroscopic version of the microscopic SR model, overcoming the seeming incompatibility between the two models. This result provides some hints for the construction of a unified perspective in which the two approaches can be properly placed.

This paper presents an analysis of three major contests for machine intelligence. We conclude that a new era for Turing’s test requires a fillip in the guise of a committed sponsor, not unlike DARPA, funder of the successful 2007 Urban Challenge.