So-called ‘social robots’ are embodied artificial agents that move in the physical space of human social interaction and can be perceived as social agents or social interaction partners due to their appearance, their movements, and their behavioral functionalities. This new type of robot technology attracts, and calls for, increasing attention, since “social robotics” is expected to be the driving factor in the comprehensive automation of all industrial sectors envisaged under the header of “Industry 4.0” or, more dramatically, the “robot revolution.” One particularly prominent reason for philosophers, and Humanities researchers more generally, to concern themselves with the prospect of a widespread use of social robots is the fact that the affordances of social robots include the capacity to elicit human emotional responses, especially positive emotional appeal and attachment. Research in Human-Robot Interaction Studies (HRI) and Social Robotics has begun to investigate the emotional dimension of our experience of social robots and the relationship between emotional response to, and social cognition of, these devices. However, the systematic complexity of the conceptual implications of these phenomena and their normative aspects call for a wider scope of interdisciplinary competences.

The contributions collected in this special issue will address the emotional dimension of the human experience of social robots from the perspective of “robophilosophy”, a new area of interdisciplinary research in philosophy (Seibt et al. 2014). Robophilosophy is “philosophy of, for, and by social robotics”, but undertaken in close interdisciplinary contact (mostly in research collaborations) with all disciplines involved in specific social robotics applications (robotics, cognitive science, psychology, sociology, anthropology, linguistics, education science). The aim of this issue is thus to engage the pressing questions raised by the emotional dimension of social robotics from a perspective that combines empirical knowledge with the specific tools and competences that philosophers can bring to questions of conceptual implication and normative assessment.

Can robots have emotions? While this question is doubtless of central significance for the ontology of mind and is currently investigated in artificial intelligence research, the focus of this special issue will not be on the possible realization of emotions as ‘inner’ states with phenomenal qualities, but on the display of emotions and the affordance of emotions. That is, the articles featured in this special issue will investigate either (i) the conceptual and ethical implications of social robots displaying emotions, or (ii) the conceptual and ethical implications of social robots affording emotions, in close interaction with empirical case studies investigating specific applications and the specific affective reactions relevant in the given context. While these two perspectives, (i) robot display of emotion and (ii) affordance of human emotion, are frequently interconnected, the editors submit that a more careful distinction between the two will allow us to develop particularly productive and focused answers to the following core questions:

Is the display of emotions an indispensable element in human social cognition—i.e., can we understand an agent as a ‘social’ agent if no emotions are displayed? And if so, how could we construct a new perceptual category for non-emotional social agents, given the rich network of conceptual implications in which our current notion of sociality is embedded? For example, what would be the legal and moral status of social agents that do not display emotions?

Should we, for ethical reasons, encourage or warn against the production of agents that aim to interact with us on the model of familiar social interactions but do not display emotions?

In view of current research results on the extent to which humans get emotionally engaged with and attached to social robots, what are the conceptual and ethical implications of designing affordances for emotions in robots? Can we conceptually make room for the fact that in human-robot interaction relational predicates (such as ‘friend’ or ‘caretaker’) or even descriptions of behavior (‘is acting friendly’ or ‘is helping’) turn into expressions for response-dependent properties (affords being responded to as friend or caretaker, affords being perceived as acting friendly or helping) which in turn would justify human emotional reactions such as delight, sympathy, or gratitude?

In which contexts might it be ethically permissible or even advisable to design such affordances for human emotional reactions? What are the implications for the legal and moral status of robots if they afford emotional reactions in the humans they interact with and, in particular, also engender ‘moral sentiments’?

In addition, methodological questions will be raised about the direct possible contribution of philosophy to HRI—for example, whether the analytical categories of social ontology or the phenomenological descriptions of human experiences in interactions with robots can usefully be added to the descriptive and analytical tools currently used in HRI.

Papers should not exceed 7,500 words (excluding notes and references), should be prepared for blind review with no identifying references to you or your institution, and should be accompanied by an abstract of no more than 250 words plus 4-6 key words. For detailed instructions consult Techné’s submission guidelines.

Few philosophers of technology enlist Wittgenstein’s work when thinking about technology, and scholars of Wittgenstein pay scant attention to remarks about technology in his work. This double neglect of (aspects of) Wittgenstein’s work is symptomatic of a more general gap between philosophy of language and philosophy of technology. This special issue of Techné: Research in Philosophy of Technology, entitled “Wittgenstein and Philosophy of Technology”, aims to close these gaps with innovative research papers that use Wittgenstein to conceptually develop existing investigations in philosophy of technology and/or to better understand and evaluate technologies in the 21st century.

Questions to be investigated will include, but are in no way limited to, the following:

Is Ludwig Wittgenstein a “forgotten” classical author in the philosophy of technology? Can we read Wittgenstein’s works in a way that renders these works helpful to the philosophy of technology?

Conversely, could current positions and concepts in the philosophy of technology furnish a criticism of Wittgenstein’s thought, a criticism perhaps underdeveloped in or absent from the established reception (positive or critical) of Wittgenstein’s works?

Can Wittgenstein’s late reflections on use and forms of life add to, possibly even rectify, current understandings of these notions in the philosophy of technology?

What light, if any, does Wittgenstein’s personal engagement with the engineering profession (from his studies in Manchester to his Vienna forays into building technology) shed on his subsequent engagements with philosophy?

What can we learn from Wittgenstein to better understand how we talk to machines and how machines talk to us (e.g. social robots)?

How can we use Wittgenstein to better understand the cultural, social, and political dimensions of contemporary technosciences such as synthetic biology (e.g. usage of the word “life”)?

Does Wittgenstein help us to understand connections between language and technology in the internet of things?

Can a Wittgensteinian approach contribute to addressing the problem of how to communicate specialized disciplinary terminology in transdisciplinary research?

Contributors are invited to critically reflect on these and other issues from various (disciplinary) perspectives, and in particular to ponder two questions: (1) what philosophy of technology can learn from Wittgenstein, and (2) how philosophy of technology and philosophy of language can be fruitfully linked.

Papers should typically range between 6,000 and 8,000 words (including notes and references), should be prepared for blind review with no identifying references to you or your institution, and should be accompanied by an abstract of no more than 300 words plus 4-6 key words. For detailed instructions consult Techné’s submission guidelines.