Artificial intelligence experts are currently divided into “presentist” and “futurist” factions that call for attention to near-term and long-term AI, respectively. This paper argues that the presentist–futurist dispute is not the best focus of attention. Instead, the paper proposes a reconciliation between the two factions based on a mutual interest in AI. The paper further proposes a realignment into two new factions: an “intellectualist” faction that seeks to develop AI for intellectual reasons and a “societalist” faction that seeks to develop AI for the benefit of society. The paper argues in favor of societalism and offers three means of concurrently addressing societal impacts from near-term and long-term AI: advancing societalist social norms, thereby increasing the portion of AI researchers who seek to benefit society; technical research on how to make any AI more beneficial to society; and policy to improve the societal benefits of all AI. In practice, it will often be advantageous to emphasize near-term AI due to the greater interest in near-term AI among AI and policy communities alike. However, presentist and futurist societalists alike can benefit from each other’s advocacy for attention to the societal impacts of AI. The reconciliation between the presentist and futurist factions can improve both near-term and long-term societal impacts of AI.

Based on the concept of data physicalization, we developed Vital + Morph, an interactive surface for remote connection and awareness of clinical data. It enables users located in remote places to monitor and feel the vital signs measured from a hospitalized person through shape-change. We propose shape-changing interfaces as a way of making data physicalization a richer, more intriguing, and more memorable experience that communicates complex information and insights about data. To demonstrate and validate our proposed concept, we developed an exploratory study about the design and its implications. To evaluate the social impact of shape-changing interfaces in the context of remote monitoring, we presented Vital + Morph at several Media Art festivals. We collected and analyzed the feedback from the visitors during the exhibitions, and discussed the possibilities of the proposed system. A preliminary evaluation shows how shape-changing displays are perceived by users, establishing not only the potential benefits but also the concerns that several users raised. Through this study, we aim to contribute to the design of remote monitoring systems by providing a novel approach for displaying clinical data that considers the richness of the physical world. In today’s information-driven society, we should focus not just on how abstract data are collected and analyzed, but also on how they can be presented and incorporated into our daily lives.

Brain–machine interfaces are systems that allow the control of a device such as a robot arm through a person’s brain activity; such devices can be used by disabled persons to enhance their lives and improve their independence. This paper is an extended version of a work that aims at discriminating between left and right imagined hand movements using a support vector machine (SVM) classifier to control a robot arm in order to help a person find an object in the environment. The main focus here is to search for the features that most efficiently describe the electroencephalogram data during such imagined gestures by comparing two feature extraction methods, namely the continuous wavelet transform (CWT) and empirical mode decomposition (EMD), each combined with principal component analysis (PCA), with the resulting features fed into SVM classifiers with linear and radial basis function (RBF) kernels. The experimental results showed high performance, achieving an average accuracy across all subjects of 92.75% with an RBF-kernel SVM classifier using CWT and PCA, compared to an accuracy of 80.25% obtained with EMD and PCA. The proposed system has been implemented and tested using data collected from five male subjects, and it enabled the control of the robot arm in the right and the left direction.
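The pipeline this abstract describes (wavelet features, dimensionality reduction, kernel classification) can be sketched in a few lines. The following is a minimal, hypothetical illustration, not the authors’ implementation: it substitutes synthetic oscillatory signals for real EEG trials, implements a simple real-valued Morlet CWT by direct convolution, and assumes scikit-learn for the PCA and RBF-kernel SVM steps.

```python
# Hypothetical sketch of a CWT -> PCA -> RBF-SVM pipeline on synthetic "EEG" trials.
# All signal parameters (frequencies, scales, trial counts) are illustrative choices.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_trial(klass, n_samples=256, fs=128):
    # Synthetic stand-in for one EEG trial: a class-dependent oscillation plus noise.
    t = np.arange(n_samples) / fs
    freq = 10.0 if klass == 0 else 22.0   # "left" vs "right" imagined movement
    return np.sin(2 * np.pi * freq * t) + 0.5 * rng.standard_normal(n_samples)

def cwt_features(signal, scales=range(1, 17)):
    # Simple real-valued Morlet CWT by direct convolution; the feature for each
    # scale is the mean absolute wavelet coefficient over the trial.
    feats = []
    for s in scales:
        t = np.arange(-4 * s, 4 * s + 1)
        wavelet = np.exp(-0.5 * (t / s) ** 2) * np.cos(5.0 * t / s)
        feats.append(np.abs(np.convolve(signal, wavelet, mode="same")).mean())
    return np.array(feats)

X = np.array([cwt_features(make_trial(k)) for k in (0, 1) for _ in range(60)])
y = np.array([k for k in (0, 1) for _ in range(60)])

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

pca = PCA(n_components=5).fit(X_tr)                     # dimensionality reduction
clf = SVC(kernel="rbf").fit(pca.transform(X_tr), y_tr)  # RBF-kernel SVM
acc = accuracy_score(y_te, clf.predict(pca.transform(X_te)))
print(f"held-out accuracy: {acc:.2f}")
```

With two clearly distinct synthetic rhythms the classifier separates the classes easily; real EEG is far noisier, which is why the choice of feature extraction method matters as much as the reported accuracy gap suggests.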

Assistive Device Art (ADA) derives from the integration of Assistive Technology and Art, involving the mediation of sensorimotor functions and perception through both psychophysical methods and the conceptual mechanics of sensory embodiment. This paper describes the concept of ADA and its origins by observing the phenomena that surround the aesthetics of prosthesis-related art. It also analyzes one case study, the Echolocation Headphones, relating its provenance and performance to this new conceptual and psychophysical approach to tool design. This ADA tool is designed to aid human echolocation. It facilitates the experience of sonic vision as a way of reflecting on and learning about the construct of our spatial perception. The Echolocation Headphones are a pair of opaque goggles that disable the participant’s vision. The device emits a focused sound beam that activates the space with directional acoustic reflection, giving the user the ability to navigate and perceive space through audition. The directional properties of parametric sound provide the participant a focal echo, similar to the focal point of vision. This study analyzes the effectiveness of this wearable sensory extension for aiding auditory spatial location in three experiments: optimal sound type and distance for object location, perceptual resolution by just-noticeable difference, and goal-directed spatial navigation for open pathway detection, all conducted at the Virtual Reality Lab of the University of Tsukuba, Japan. The Echolocation Headphones have been designed for a diverse participant base. They have the potential both to aid auditory spatial perception for the visually impaired and to train sighted individuals in gaining human echolocation abilities. Furthermore, this Assistive Device artwork prompts participants to contemplate the plasticity of their sensorimotor architecture.

This paper presents an exercise in the formalization of political principles, taking as its theme the concept of distributive justice that Karl Marx advanced in his Critique of the Gotha Programme. We first summarize the content of the Critique of the Gotha Programme. Next, we transcribe the core of Marx’s presentation of the concept of distributive justice. We then present our formalization of Marx’s conception and use that formal analysis to confront Marx’s principle of distributive justice with John Rawls’ conception of justice as fairness and the principles of distributive justice that derive from it. Finally, we discuss methodological issues relative to, and implications of, the way of formalizing political principles introduced here.

Various potential strategic interactions between a “strong” artificial intelligence and humans are analyzed using simple 2 × 2 ordinal games, drawing on the New Periodic Table of those games developed by Robinson and Goforth. Strong risk aversion on the part of the human player leads to shutting down the AI research program, but alternative preference orderings by the human and the AI result in Nash equilibria with interesting properties. Some of the AI–human games have multiple equilibria, and in other cases a Pareto improvement over the Nash equilibrium could be attained if the AI’s behavior towards humans could be guaranteed to be benign. The preferences of a superintelligent AI cannot be known in advance, but speculation is possible as to its ranking of alternative states of the world, and as to how it might assimilate the accumulated wisdom of humanity.
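To make the game-theoretic machinery concrete, here is a small illustrative sketch, not taken from the paper: a brute-force search for pure-strategy Nash equilibria in a 2 × 2 game with ordinal payoffs ranked 1–4 (4 = best), the convention used in Robinson and Goforth’s periodic table. The example payoff matrix is entirely hypothetical.

```python
# Pure-strategy Nash equilibria of a 2x2 ordinal game.
# payoffs[r][c] = (row_player_rank, col_player_rank), ranks 1..4 with 4 = best.

def pure_nash(payoffs):
    """Return the cells (r, c) where neither player can improve their rank
    by unilaterally switching to their other strategy."""
    equilibria = []
    for r in range(2):
        for c in range(2):
            row_rank, col_rank = payoffs[r][c]
            row_ok = row_rank >= payoffs[1 - r][c][0]   # row player can't gain
            col_ok = col_rank >= payoffs[r][1 - c][1]   # column player can't gain
            if row_ok and col_ok:
                equilibria.append((r, c))
    return equilibria

# Hypothetical "AI development" game: row = human (0: develop, 1: shut down),
# column = AI (0: benign, 1: hostile).
game = [[(4, 3), (1, 4)],
        [(2, 1), (3, 2)]]
print(pure_nash(game))
```

In this hypothetical matrix the only pure-strategy equilibrium is (shut down, hostile), even though (develop, benign) would be a Pareto improvement for both players, echoing the abstract’s point that better outcomes require the AI’s benign behavior to be guaranteed.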

In Homo Deus, Yuval Noah Harari argues that technological advances of the twenty-first century will usher in a significant shift in how humans make important life decisions. Instead of turning to the Bible or the Quran, to the heart, or to our therapists, parents, and mentors, people will turn to Big Data recommendation algorithms to make these choices for them. Much as we rely on Spotify to recommend music to us, we will soon rely on algorithms to decide our careers, spouses, and commitments. Harari further predicts that the state will then take away individuals’ rights to make their own choices about their lives. If Google knows where your children would flourish best in school, why should the state allow a fallible human parent to decide? Liberalism—which, as Harari uses this term, refers to a state of society in which human freedom to choose is respected and championed—will collapse. In this paper, I argue that Harari’s conception of the future implications of recommendation algorithms is deeply flawed, for two reasons. First, users will not rely on algorithms to make decisions for them because they have no reason to trust algorithms, which are developed by companies with their own incentives, such as profit. Second, for most of our life decisions, no suitable algorithm can be developed, because the factors relevant to the decisions we face are unique to our situation. I present an alternative depiction of the future: instead of relying on algorithms to make decisions for us, humans will use algorithms to enhance our decision-making by helping us consider the most relevant choices first and notice information we might otherwise miss. Finally, I argue that even if computers could make many of our decisions for us, liberalism as a political system would emerge unscathed.

The idMirror project consists of a tablet computer specially equipped with a small mirror and a newly developed Android app. The Android application uses face recognition to detect the location of the user’s face in relation to the device and, based on this, renders a computer graphic at the location of his or her reflection. The goal of the idMirror project as a research tool was to conduct an exploratory study on cultural differences at exhibition venues. For this study, we analyzed the face images of 150 participants aged between 18 and 75: 50 collected in Europe, 50 in the USA, and 50 in Japan. We found global exhibitions a good research platform for random sampling of subjects. For the study, we examined the first 50 participants at each exhibition, and we found some significant differences. Through this exploratory study in the form of an art installation, we wanted to learn whether the facial expression upon one’s own self-observation is a function of gender and of the venue where the exhibition takes place.

Nass and Reeves’ media equation paradigm within human–computer interaction challenges long-held assumptions about how users approach computers. Given a rudimentary set of cues present in the system’s design, users are said to unconsciously treat computers as genuine interactants—extending rules of politeness, biases, and human interactive conventions to machines. Since the results have wide-ranging implications for HCI research methods, interface design, and user experiences, researchers are hard-pressed to experimentally verify the paradigm. This paper focuses on the methodology of attributing the necessary social cues to the agent, a core aspect of the experimental design of studies dealing with the media equation. A typology of experimental anthropomorphisms is developed, allowing an assessment of how the differing axiomatic assumptions affect the relevance of the results for an evaluation of the paradigm. The paper concludes with a series of arguments in favour of one particular anthropomorphism type for researching the media equation.

Usually technological innovation and artistic work are seen as very distinct practices, and the innovation of technologies is understood in terms of design and human intention. Moreover, thinking about technological innovation is usually categorized as “technical” and disconnected from thinking about culture and the social. Drawing on work by Dewey, Heidegger, Latour, and Wittgenstein, and responding to academic discourses about craft and design, ethics and responsible innovation, transdisciplinarity, and participation, this essay questions these assumptions and examines what kind of knowledge and practices are involved in art and technological innovation. It argues that technological innovation is indeed “technical” but, if conceptualized as techne, can be understood as art and performance. It is argued that in practice, innovative techne is not only connected to episteme as theoretical knowledge but also has the mode of poiesis: it is not just the outcome of human design and intention but rather involves a performative process in which there is a “dialogue” between form and matter and between creator and environment, in which humans and non-humans participate. Moreover, this art is embedded in broader cultural patterns and grammars—ultimately a “form of life”—that shape and make possible the innovation. In that sense, there is no gap between science and society—a gap that is often assumed in STS and in, for instance, discourse on responsible innovation. It is concluded that technology and art were divorced, conceptually, only relatively recently and unfortunately, but that in practices and performances they were always linked. If we understand technological innovation as a poetic, participative, and performative process, then bringing together technological innovation and artistic practices should not be seen as a marginal or luxury project but as one that is central, necessary, and vital for cultural-technological change.
This conceptualization not only supports a different approach to innovation but also has social-transformative potential, with implications for the ethics of technology and responsible innovation.

The current assumptions of knowledge acquisition brought about the crisis in the reproducibility of experiments. A complementary perspective should account for the specific causality characteristic of life by integrating past, present, and future. A “second Cartesian revolution,” informed by and aware of anticipatory processes, should result in scientific methods that transcend the theology of determinism and reductionism. Today, science, itself an expression of anticipatory activity, makes possible alternative understandings of reality and its dynamics. For this purpose, the study advances G-complexity for defining and comparing decidable and undecidable knowledge. AI and related computational expressions of knowledge could benefit from awareness of what distinguishes the dynamics of life from any other expression of change.

In this paper, I examine Nāgārjuna’s idea of the self and its contemporary interpretations in philosophy. Nāgārjuna examines the emptiness of various things, among which the emptiness of the self occupies an important position in the Buddhist philosophical tradition. The main aim of this paper is to understand the meaning of emptiness as an explanation of the nature of the self and to show how it differs from the substantial notion of self. At the same time, Nāgārjuna’s idea of the self is not identical with the contemporary materialistic notion of self. The paper is divided into five sections. In the first section, I explain the nature of emptiness in relation to the self and how it differs from the substantial self. The second section focuses on the constituents of the self. The third section brings out the nature of the self. In the fourth and fifth sections, I bring out Nāgārjuna’s idea of the emptiness of the self with respect to the contemporary debates on the nature of the self.

Recommender systems are recently developed computer-assisted tools that support the social and informational needs of various communities and help users exploit huge amounts of data to make optimal decisions. In this study, we present a new recommender system for assessment and risk prediction in child welfare institutions in Israel. The system exploits a large diachronic repository of manually completed questionnaires on the functioning of welfare institutions and proposes two different rule-based computational models. The system accepts users’ requests via a simple graphical interface, calculates the institutions’ profiles according to user preferences, and presents assessment scores, trends, and comparative analyses of the corresponding data using assorted visual aids. Based on the analysis, the system offers three different strategies for the objective assessment of the institutions’ functioning and risks. Qualitative and quantitative evaluation of the system’s effectiveness and accuracy demonstrates that it substantially improves the assessment process of a welfare institution. Moreover, it provides an effective tool for objective large-scale analysis of an institution’s overall state and trends, which were previously based primarily on the institution supervisors’ subjective judgment and intuition. In addition, the proposed recommender system has great practical and social impact, as it may help identify and avert potential problems, malfunctions, flaws, risks, and even tragic incidents in child welfare institutions, as well as increase their overall functioning levels. As a result, as a long-term social implication, the system may also help reduce inequality and social gaps in Israeli society.

Looking back on the development of computer technology, particularly in the context of manufacturing, we can distinguish three big waves of technological exuberance with a wavelength of roughly 30 years. In the first wave, during the 1950s, the mainframe computers of the time were conceptualized as “electronic brains” and envisaged as the central control unit of an “automatic factory”. Thirty years later, during the 1980s, knowledge-based systems in computer-integrated manufacturing were adored as the computational core of the “unmanned factory”. Both waves dismally stranded on the contumacies of reality. Nevertheless, again thirty years later, we now experience the advent of the “smart factory” based on networks of “artificially intelligent” multi-agent or “cyber-physical systems”. From the very beginning, these technological exuberances were rooted in mistaken metaphors describing the artifacts and, hence, in delusions about the true nature of computer systems. The behaviour of computers is, as computing science teaches us, strictly restricted to executing computable functions by means of algorithms; it thus neither resembles the performance of a brain as part of a complex sensitive living body, nor is it in any meaningful sense “knowledgeable” or “intelligent”. When the delusion of being able to implement “smart factories”, despite the countless failures before, gains momentum anew, it appears absolutely essential to reflect on the underlying misconceptions.

Contemporary data practices are inducing a convergent saturation point wherein every human action, reaction, interaction, transaction, thought, or desire is quantified, reified, recorded, and used. Physical or virtual, all is recorded, known or unknown, seen or unseen, until data permeates every facet of our shared human existence. The implications of this eventuality are potentially so far-reaching that the very notion of who we are might be fundamentally altered, resulting in new ontologies of the self in a world of Total Data. This polemic paper reflects on the implications that Total Data has for the ontological self in a range of individual and shared contexts, and considers the potential it has to ultimately be symbiotic or assimilatory. It suggests that the current trajectory for Total Data is more assimilatory than symbiotic, demonstrating more potential to collectively monitor and control people than to emancipate and empower them. In response, it calls for an authentic debate and reassessment of current data practices, and for an urgent reprioritisation of core and enduring human-centred values and of symbiosis in technological systems development, to emancipate and empower people living in a Total Data world.

ICT technologies have come to comprehensively represent images and expectations of the future. Hopes of ongoing progress, economic growth, skill upgrading, and possibly also democratisation are attached to new ICTs, as are fears of totalitarian control, alienation, job loss, and insecurity. Currently, with the terms “Industry 4.0” and “Fourth Industrial Revolution”, public institutions, private institutions, and the literature refer to the inchoate transformation of the production of goods and services resulting from the application of a new wave of technological innovations: interconnected collaborative robots; machine learning; Artificial Intelligence; 3D printers connected to digital development software; simulation of interconnected machines; integration of the information flow along the value chain; and multidirectional communication between manufacturing processes and products. According to the main representations of Industry 4.0 by private and public institutions, its effects are expected to be mainly positive with regard to productivity, economic opportunities, and the future of work. The positive potentials now attributed to the new cycle of innovation evoke and expand those attributed to the previous waves of innovation linked to ICT technologies and, even before, to the transition from Fordism to Post-Fordism. However, these transformations have so far not fulfilled any of the promises they raised. Improvements for workers in terms of work conditions, work performance, and work relationships cannot be determined by any technical innovation in itself, since technological innovation is always socially shaped.

The implementation of cyber-physical and similar systems depends on prevailing social and economic conditions. It is here argued that, if the effect of these technologies is to be benign, the current neo-liberal economy must change to a radically more cooperative model. In this paper, economy change means a thorough change to a qualitatively different kind of economy. It is contrasted with economic change, which is the kind of minor change usually considered in mainstream discourse. The importance of language is emphasised, including that of techno-optimism and that of economic conservatism. Problems of injustice, strife, and ecological overload cannot be solved by conventional growth together with technical efficiency gains. Rather, a change is advocated from economics-as-usual to a broader concept, oikonomia, which takes into account all that contributes to a good life, including what cannot be represented quantitatively. Some elements of such a broader economy are discussed. It is argued that the benefits of technology can be enhanced and the ills reduced in such an economy. This is discussed in the case of cyber-physical systems under the headings of employment, security, standards and oligopoly, and energy efficiency. The paper concludes that such systems, and similar technological developments, cannot resolve the problems of sustainability within an economy-as-usual model. If, however, there is the will to create a cooperative and sustainable economy, technology can contribute significantly to the resolution of present problems.

Fighting crime has historically been a field that drives technological innovation, and it can serve as an example of different governance styles in societies. Predictive policing is one of the recent innovations that covers technical trends such as machine learning, preventive crime fighting strategies, and actual policing in cities. However, it seems that a combination of exaggerated hopes produced by technology evangelists, media hype, and ignorance of the actual problems of the technology may have boosted sales of software that supports policing by predicting offenders and crime areas. In this paper we analyse currently used predictive policing software packages with respect to common problems of data mining, and describe challenges that arise in the context of their socio-technical application.

The present work tackles the issue of the effects of digitalisation on employment. This issue has been attracting growing interest, in particular because of the anxiety generated by the idea that digital technologies could cancel a large number of jobs. Although I agree with the argument put forward in opposition to the existence of a causal link between technological innovation and increased productivity at the macroeconomic level, I believe that the novelty and pervasiveness of digital technologies require more in-depth micro-level analysis in order to understand the extent to which new digital technologies are currently employed by leading manufacturing companies and the ways new technologies are affecting employment. The empirical findings show that among the different technologies included under the umbrella of Industry 4.0, mainly robots have received a great deal of attention so far, while the current application and employment impact of other emerging technological opportunities, such as 3D printing, the Internet of Things, augmented reality, and big data analytics, have not been studied yet. In relation to the qualitative changes in the labour market, our empirical research confirms that there are new types of skills that will be demanded in manufacturing in the future, in particular in relation to service provision and software development.

The dream of perpetual motion has charmed us for millennia; the desire for machines substituting for men was already present in imperial China and classical Rome; the medieval alchemists tried to build automata, and automata showed up in Renaissance princes’ plays. In the Aladdin fable, the sorcerer instantly satisfies all the wishes of the lamp’s owner. In other words, the fiction of omnipotence has accompanied humanity from the very beginning. Is God omnipotent? So, why not humanity? Building automatic factories and digitally modelling human work both make realistic what once looked utopian. An unmanned mode of production can perhaps be achieved, in which machines can produce whatever we desire, endlessly. Do numbers not run from zero to infinity? There is, nonetheless, an obstacle. Human desires are subjective; therefore, from the standpoint of the producers, of the automatic factory’s owners, there is a very difficult problem to solve. How to manage human desires? How to transform desire itself into an automatic factor of production? Digital modelling of human work is not enough; the human itself must be modelled. Full control of him/her must be achieved. It means understanding a priori each of his/her desires. It means leading him/her step by step throughout his/her life. It means, in short, transforming him/her into an automaton. The nightmare of a bees’ or ants’ society, the nightmare of losing one’s free will, comes closer and looks menacing, like the black clouds of a threatening thunderstorm.

The new industrial paradigm of Industry 4.0, or smart industry, is at the core of contemporary debates. The public debate on Industry 4.0 typically offers two main perspectives: the technological one and the one concerning industrial policies. By contrast, the discussion of the social and organizational effects of the new paradigm is still underdeveloped. The article specifically examines this aspect and analyzes the changes that workers, along with the organization of work, undergo in smart digital factories. The study originates from an empirical survey conducted by the author together with a multidisciplinary research group between 2014 and 2015 in some of the largest Italian factories. In particular, the article analyzes the links between digital society, digital culture, and Industry 4.0, focusing on the issue of people’s participation in the process of change, within a specific case study from the railway sector. Many elements of the Industry 4.0 paradigm are widespread outside the factory, in society; they are not only technological elements but also cultural ones. One of the key aspects of the analysis is the question of participation and the “person-centered” culture. The subject is addressed critically by presenting both the RE-personalization processes and the new processes of DE-personalization caused by digital automation.

With increasing technological improvements, production processes are becoming more and more automated. Nevertheless, full automation is improbable in the medium term since human abilities cannot yet be completely replaced. Therefore, it is likely that so-called hybrid human–robot teams will take over future production. This raises questions regarding the shaping of future production and the effects it will have on employees, workstations, and companies as a whole. The project “Work in the Industry of the Future” addresses the entirely new cooperative relationship between man and technology in Industry 4.0 and its impact on opportunities for the workforce. To derive the requirements and effects of hybrid workplaces, an initial work analysis of existing workplaces with varying levels of technological enhancement will be conducted. Multiple standardized work analysis instruments that vary in method, duration, level of analysis, and recorded characteristics already exist. This paper gives an overview of a selection of these methods that can be used in production.

The Fourth Industrial Revolution has become a global buzzword since the World Economic Forum adopted it as an annual issue in 2016. It is represented by hyper-automation and hyper-connectivity based on artificial intelligence, big data, robotics, and the Internet of Things. AI, big data, and robotics can contribute to developing hyper-automation that increases productivity and intensifies industrial production. In particular, robots using AI can make decisions by themselves, as human beings do, in complicated processes. Along with hyper-automation, hyper-connectivity increases not only at the national but also at the global level through the use of information and communication technologies. The IoT is the core technology for creating hyper-connectivity in a Cyber-Physical System (CPS) that connects technology, nature, and human beings. Accordingly, a perfect convergence between ICT and manufacturing can be completed in the Fourth Industrial Revolution era, and an extremely efficient, flexible production system will be established by spreading the IoT in CPS. Under such conditions, innovative clusters must play their traditional roles as cradles of technology innovation and commercialization. It is a difficult challenge for innovative clusters to meet their targets and to adjust to the changing new environment at the same time. This paper discusses how the Fourth Industrial Revolution can change the global production chain and how core technologies function in industries. Furthermore, it focuses on how innovative clusters have to evolve to respond to the Fourth Industrial Revolution. Last but not least, it also analyzes whether or not innovative clusters can play their roles as technology innovation hubs in the real world and in CPS in the Fourth Industrial Revolution era.

The impact of cyberculture, of digital devices on young people as extensions of the body, can be seen in terms of the decreasing structuring of thoughts and information, increasing impulsivity in perception and action, and the development of more primitive defense mechanisms. These adverse impacts result in feelings of isolation and devaluation, frustration with the present and uncertainty about the future, exteriorization and floating identities, mimetic and adhesive identifications, less cohesion of the self, and decreasing tolerance of the other. This paper focuses on the following themes. Symbiosis versus syncretism: the affirmations of symbiosis; the dilutions of syncretism. Synopsis: too much syncretism, too little symbiosis; lack of a deeper, more lasting, and sustainable co-construction of knowledge; lack of increased, more independent personal cognitive deepening; lack of the ability to be alone. Causality and free will: symbiotic versus syncretic causality. Conclusions: cyber-selves—either distributed or not at all?

New forms of artificial intelligence on the one hand and the ubiquitous networking of “everything with everything” on the other characterize the fourth industrial revolution. This results in a changed understanding of human–machine interaction and in new models for production, in which humans and machines, together with virtual agents, form hybrid teams. The empirical study “Socializing with robots” aims to gain insight into the conditions of development and processes of hybrid human–machine teams. In the experiment, human–robot actions and interactions were closely observed in a virtual environment. Robots as partners differed in shape and behavior. Participants were instructed to achieve an objective that could only be reached via close teamwork. This paper unites aspects from core disciplines of social robotics and psychology that contribute to anthropomorphization with the empirical insights of the experiment. It focuses on the psychological effects of anthropomorphization and mechanization, taking the inter- and transdisciplinary field of social robotics as a starting point.

This paper presents an introductory overview of the main issues that the digitalisation of industrial enterprises known as Industry 4.0 raises for the social sciences. First, it shows that this technological transition—which is unfinished and is seen to be in continuity with the so-called “third industrial revolution”—cannot be interpreted through a deterministic approach. It can be analysed more usefully as a range of decisions affecting the industrial policies of national states, the conception and design of machines, their adoption in production processes and finally their use by operators. Second, certain aspects of Industry 4.0 of special concern for the organisation of work processes are analysed on various scales. Finally, we explain various hypotheses that social and economic research is developing with regard to the new technologies’ controversial effects on employment.

Big cities and growing industrial areas bring a high risk of different kinds of pollution that affect society’s quality of life. Discovering and monitoring polluted areas using autonomous mobile robots is nowadays a frequently considered solution to both environmental and human safety problems. As part of a distributed control system, such robots can help to improve the efficiency of existing conventional pollution prevention systems. On the other hand, during the last decade, wireless sensor networks have attracted the interest of specialists from different areas by posing a large number of theoretical and practical challenges related to their implementation. This paper attempts to deal with both issues by presenting a possible way of building robotized WSNs suitable for solving different environmental monitoring tasks. The goal is to transform the WSN into an adaptive sensor system with intelligent behavior that can be used to discover and track spots or areas where given monitored environmental parameters violate defined thresholds. The inclusion of robotized agents in the WSN structure can provide additional flexibility with respect to the installation of the network sensors and allow reliable information gathering and transfer. The proposed algorithmic methods have been tested on a pre-built laboratory prototype of a robotized WSN. The robotized agents have been built using iRobot Create mobile platforms that are additionally equipped with Gumstix Verdex Pro XL6P single-chip computers with various expansion modules.

This simulation study presents a fuzzy model-based neural network control method. The basic idea is to apply a special type of neural network based on radial basis functions, which belongs to the class of associative memory neural networks. The novelty of this approach is the use of an RBF neural network controller in a model reference adaptive control architecture, based on a one-step-ahead Takagi–Sugeno fuzzy model. The objective is to control the concentration in a highly non-linear continuous stirred-tank reactor system and to assure its stability by limiting the temperature rise generated by the irreversible exothermic reaction. This contribution will help to reduce the environmental impact of chemical waste.
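The abstract does not give the controller’s internal details; as a minimal sketch of the kind of Gaussian radial basis function network such a controller is built on (the centres, width, and output weights below are purely hypothetical placeholders, not the paper’s values):

```python
import numpy as np

def rbf_activations(x, centers, width):
    """Gaussian basis functions: phi_i(x) = exp(-||x - c_i||^2 / (2 w^2))."""
    d2 = np.sum((centers - x) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * width ** 2))

def rbf_output(x, centers, width, weights):
    """Network output: a linear combination of the basis activations."""
    return rbf_activations(x, centers, width) @ weights

# Hypothetical example: two basis centres, output weights chosen by hand.
centers = np.array([[0.0], [1.0]])
weights = np.array([1.0, 0.0])
y = rbf_output(np.array([0.0]), centers, width=1.0, weights=weights)
```

In an adaptive scheme of the sort the abstract describes, the output weights would be updated online so that the plant tracks the fuzzy reference model; here they are fixed for illustration only.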

At a time when fossil fuel burning, nationalism, ethnic and religious intolerance, and other retrograde steps are being promoted, the prospects for world peace and environmental systems stability may appear dim. Precisely because of this, it is all the more important to continue to examine the sources of conflict. A major obstacle to general progress is the currently dominant economic practice and theory, here called the economy-as-usual or economics-as-usual, as appropriate. A special obstacle to constructive change is the language in which economic matters are usually discussed. This language is narrow, conservative, technical and often obscure. The rapid changes in the environment are largely kept in a separate compartment. If, however, the partition is removed, economics-as-usual, with its dependence on growth and its widening inequality, is seen to be unsustainable. Radical economic change, for better or worse, is to be expected. Such change is here called economy change. The change could be for the better if it involved an expansion of the concept of economics itself, along the lines of oikonomia, a modern revival of the classical Greek term for household management. In such an expanded view, not everything of economic value can be measured. It is argued that economics-as-usual is the source of much strife. Some features are indicated of a less conflictual economy—more just, cooperative and peaceful. These features include a dignified life available to all people as of right, the word ‘wealth’ being reconnected with weal, well and well-being, and ‘work’ being understood as including all useful activity.

3D printing, or additive manufacturing, is a novel method of manufacturing parts directly from a digital model using a layer-by-layer material build-up approach. This tool-less manufacturing method can produce fully dense metallic parts in a short time and with high precision. Features of additive manufacturing such as freedom of part design, part complexity, lightweighting, part consolidation, and design for function are garnering particular interest in metal additive manufacturing for aerospace, oil and gas, marine, and automobile applications. Powder bed fusion, in which each powder bed layer is selectively fused using an energy source such as a laser, is the most promising additive manufacturing technology for producing small, low-volume, complex metallic parts. This review presents an overview of 3D printing technologies, materials, applications, advantages, disadvantages, challenges, and economics of 3D metal printing technology, the DMLS process in detail, and 3D metal printing perspectives in developing countries.

The emergence of new technologies such as the Internet of things and the cloud transforms the way we interact. Whether it be human-to-human or human-to-machine interaction, the size of the networks keeps growing. As networks become more complex, with many interconnected components, it is necessary to develop distributed, scalable algorithms so as to minimize the computation required for decision making in such large-scale systems. In this paper, we consider a setup where each agent in the network updates its opinion by relying on its neighbors’ opinions. The information exchange between the agents is assumed to be mutual. The cluster consensus problem is investigated for networks represented by static or time-varying graphs. Joint and integral connectivity conditions are utilized to determine the number of clusters that form as the interactions among the agents evolve over time. Finally, some numerical examples are given to illustrate the theoretical results.
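The paper’s update rule is not reproduced in the abstract; a minimal sketch of the kind of neighbor-averaging dynamics it describes (the step size and the example graph are assumptions made here for illustration) might look like:

```python
import numpy as np

def consensus_step(opinions, adjacency, step=0.2):
    """One synchronous update: each agent moves toward the mean opinion
    of its neighbors. adjacency is symmetric (mutual information exchange)."""
    new = opinions.copy()
    for i in range(len(opinions)):
        nbrs = np.flatnonzero(adjacency[i])
        if nbrs.size:
            new[i] += step * (opinions[nbrs].mean() - opinions[i])
    return new

# Two disconnected pairs of agents: the network splits into two clusters.
A = np.array([[0, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]])
x = np.array([0.0, 1.0, 10.0, 11.0])
for _ in range(200):
    x = consensus_step(x, A)
```

After iterating, agents 0 and 1 agree near 0.5 and agents 2 and 3 near 10.5: two clusters emerge, matching the intuition that, absent further connectivity, clusters form along the connected components of the interaction graph.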

Digitalization has become a cornerstone of competitiveness in the industrial arena, especially in cases of small lot sizes with many variants in the goods produced. Managers of industrial facilities have to handle the complexity that comes with Industry 4.0 in diverse dimensions in order to leverage the potential of digitalization for their sites. This article describes major drivers of this complexity in current industrial automation to outline the environment of today’s challenges for managers of this technical transition—and shows how managed industrial security services can contribute to stabilizing the industrial system. An outlook is given on future automation scenarios as well as the major concepts involved in their protection.

IoT connects devices, humans, places, and even abstract items such as events. Driven by smart sensors, powerful embedded microelectronics, high-speed connectivity and the standards of the internet, IoT is on the brink of disrupting today’s value chains. Big Data, characterized by high volume, high velocity and a high variety of formats, is both a result of and a driving force for IoT. The datafication of business presents completely new opportunities and risks. To hedge the technical risks posed by the interaction between “everything”, IoT requires comprehensive modelling tools. Furthermore, new IT platforms and architectures are necessary to process and store the unprecedented flow of structured and unstructured, repetitive and non-repetitive data in real time. In the end, only powerful analytic tools are able to extract “sense” from the exponentially growing amount of data and, as a consequence, data science becomes a strategic asset. The era of IoT relies heavily on standards for technologies which guarantee the interoperability of everything. This paper outlines some fundamental standardization activities. Big Data approaches for real-time processing are outlined and tools for analytics are addressed. In consequence, IoT is an evolutionary process whose success in penetrating all dimensions of life depends heavily on close cooperation between standardization organizations, open source communities and IT experts.

The aim of this article is to inquire into the potential relationship between changes in crime rates and changes in the gross domestic product growth rate, based on historical statistics of Japan. This national-level study used a dataset covering 88 years and 13 attributes. The data were processed with the self-organizing map, with separation power checked by our ScatterCounter method, assisted by other clustering methods and by statistical methods for obtaining comparable results. The article is an exploratory application of the SOM in the study of criminal phenomena through the processing of multivariate data. The research confirmed previous findings that the SOM is able to cluster the present data efficiently and characterize the resulting clusters. Other machine learning methods were applied to confirm the clusters computed with the SOM. The correlations obtained between GDP and the other attributes were mostly weak, though a few of them were of interest.
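The abstract does not describe the SOM configuration the study used; the following is only a minimal sketch of the underlying algorithm, where the map size, learning rate, neighbourhood radius, and the synthetic two-cluster data are all illustrative assumptions:

```python
import numpy as np

def som_train(data, n_units=4, epochs=50, lr=0.5, radius=1.0, seed=0):
    """Minimal 1-D self-organizing map: find the best-matching unit for
    each sample and pull it, along with its map neighbours, toward it."""
    rng = np.random.default_rng(seed)
    codebook = rng.standard_normal((n_units, data.shape[1]))
    for t in range(epochs):
        a = lr * (1.0 - t / epochs)          # decaying learning rate
        for x in data:
            bmu = np.argmin(np.sum((codebook - x) ** 2, axis=1))
            for j in range(n_units):
                h = np.exp(-((j - bmu) ** 2) / (2.0 * radius ** 2))
                codebook[j] += a * h * (x - codebook[j])
    return codebook

# Synthetic stand-in for the multivariate crime/GDP data: two clear clusters.
data = np.array([[0.0], [0.5], [10.0], [10.5]])
codebook = som_train(data)
```

After training, the codebook vectors spread across the data so that some units sit near each cluster; inspecting which samples share a best-matching unit is what lets the SOM characterize clusters, as the study does for the crime/GDP attributes.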

This paper discusses the correlation between transport, as a dynamic system, and the economic growth of specific regions and countries expressed in gross domestic product. The contemporary processes of transforming input resources into desired outcomes need new intelligent approaches based on new information system techniques. This research specifically aims to provide analyses that give a better-founded answer to some of the complex questions relating economic dynamics, new information technologies, sustainable development, and transport. The assumption is that transport can be considered one of the most influential and vibrant systems of the economy. Given that the transport system is a dynamic system which is continually changing in time and space, maintaining its functioning and competitiveness requires support from high-level communication and operation technology. From this assumption an interesting research question arises: does the contribution of transport to GDP increase or decrease over time? To answer this question, the research employed a mixed methodology combining qualitative and quantitative features. The qualitative conclusions identify and define the factors which determine the relationship between transport and the economy. The quantitative research is oriented toward generating information and knowledge from statistical data which provide evidence of the actual relationship between transport and the economy.
The main goal of this research paper is to provide decision makers, planners, and the academic community with a clear account of the relationship between the transport system and the economic development of a country or region in terms of economic growth expressed in GDP, and to raise their awareness regarding the implementation of new technology, new information systems, and smart or intelligent approaches within the field.

Fuzzy cognitive maps are a system modeling methodology applied mostly to complex dynamic systems, describing the causal relationships that exist between a system’s parameters, called concepts. Fuzzy cognitive map theories have been used in many applications, but they present several drawbacks and deficiencies. These limitations are addressed and analyzed, and fuzzy cognitive map theories are revisited. A novel approach to modelling fuzzy cognitive maps is proposed to increase knowledge of the system and overcome some of these limitations. A state space approach is used in the new model to disaggregate the concepts into different categories. The disaggregation of the concepts into state concepts, input concepts and output concepts is mathematically formulated. The proposed method and the new model are used for the calculation of a building’s energy consumption and the management of its load. Simulations are performed as a case study testing the newly proposed method. The problem of the high energy consumption of the building sector is studied using the new fuzzy cognitive map model. Discussion of the obtained results, along with future research directions, is provided.
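The paper’s own state-space formulation is not given in the abstract; the classical fuzzy cognitive map iteration it builds on can be sketched as follows (the two-concept map and its weights are purely illustrative, not the building-energy model):

```python
import numpy as np

def fcm_step(activations, W):
    """One fuzzy cognitive map iteration: each concept accumulates the
    causally weighted influence of the others and is squashed by a sigmoid.
    W[j, i] is the causal weight from concept j to concept i."""
    net = activations + activations @ W
    return 1.0 / (1.0 + np.exp(-net))   # keeps activations in (0, 1)

# Illustrative two-concept map: concept 0 reinforces concept 1,
# concept 1 weakly inhibits concept 0.
W = np.array([[0.0, 0.5],
              [-0.3, 0.0]])
a = np.array([0.8, 0.2])
for _ in range(200):                    # iterate toward a fixed point
    a = fcm_step(a, W)
```

With modest weights, the sigmoid makes the iteration a contraction, so the activations settle to a fixed point that can be read as the modeled system’s inferred steady state.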

This article offers a critical discussion, in the form of a debate among experts in the fields of networks, human behaviour, and social analysis, about key issues that arguably affect human nature and society in the digital age. Based on the responses of Nicholas Christakis in an interview given to the authors, some key questions, applications, and limitations of research on digital networks are discussed, together with hot issues related to the nature of digital data and experimentation in contemporary social science. Finally, current and future prospects are presented in relation to data science and society in the light of the Fourth Industrial Revolution.

Nowadays, social media analysis systems feed on user-contributed data, whether for beneficial purposes, such as emergency management, or for user profiling and mass surveillance. Here, we discuss the power and pitfalls of public accessibility to social media-based systems, with specific regard to the emergency management application EARS. We investigate whether opening such systems to the population at large would further strengthen the link between communities of volunteer citizens, intelligent systems, and decision makers, thus moving in the direction of more sustainable and resilient societies. Our analysis highlights fundamental challenges and provides interesting insights into a number of research directions aimed at developing human-centered social media-based systems.

Digital storytelling has become a popular method for curating community, organisational, and individual narratives. Since its beginnings over 20 years ago, projects have sprung up across the globe in which authentic voice is found in the narration of lived experiences. Contributing to a Collective Intelligence for the Common Good, the authors of this paper ask how shared stories can give community groups impetus to identify what they seek to change, and how digital storytelling can be effectively implemented in community partnership projects so that authentic voices are carried to other stakeholders in society. The Community Digital Storytelling method is introduced as a means of addressing community-of-place issues. There are five stages to this method: preparation, storytelling, story digitisation, digital story sense-making, and digital story sharing. Additionally, a Storytelling Cycle of Trust framework is proposed. We identify four trust dimensions as imperative foundations for implementing community digital media interventions for the common good: legitimacy, authenticity, synergy, and commons. This framework is concerned with increasing the impact that everyday stories can have on society; it is an engine driving prolonged storytelling. From this perspective, we consider the ability to scale up the scope and benefit of stories in civic contexts. To illustrate this framework, we use experiences from the CDST workshop in northern Britain and compare these with a social innovation project in the southern Netherlands.

Social service organizations have long used data in their efforts to support people in need, for the purposes of advocacy, tracking, and intervention. Increasingly, such organizations are joining forces to provide wrap-around services to clients in order to “move the needle” on intractable social problems. Groups using these strategies, called Collective Impact, develop shared metrics to guide their work, sharing data, finances, infrastructure, and services. A major emphasis of these efforts is on tracking clients and measuring impacts. This study explores a particular type of Collective Impact strategy called Promise Neighborhoods. Based on a federal grant program, these initiatives attempt to close the achievement gap in particular geographic communities. Through an analysis of publicly available documents and information, the study analyzes the ways these strategies enact a collective intelligence for the common good. The analysis focuses specifically on issues surrounding data collection and use, youth agency, leadership and governance, and funding streams. Together, these foci tell a story of an increasingly used “intelligence” with a limited sense of “collective” and a narrow vision of a “common good.” Using this as a platform, the paper explores alternatives that might develop more robust practices around these concepts.

This paper explores two reddit communities that supported Bernie Sanders and Donald Trump, respectively, in the run-up to the 2016 US Presidential election campaign. Much of the paper is dedicated to explaining how reddit functions, describing the behaviour of the subreddit communities in question, and then asking whether these demonstrated collective intelligence. Subreddit communities submit and vote on content; through their votes they make collective decisions about which content will be broadcast to their community. Large subreddit communities that formed rapidly to support a candidate in an election had not previously been observed on reddit—these offered an interesting context for considering whether subreddit communities demonstrate collective intelligence. Voting is a key determinant of what happens on each subreddit, and because it is conducted anonymously it is not possible to understand the role that every individual plays in the functioning of the subreddit. The behaviour of these subreddit communities can only be understood as a collective of submitting, commenting, voting and moderating participants. Whether these collectives behave intelligently is a matter of how one defines intelligence—but it is clear that they can be effective in pursuing certain ends. These collectives encounter and sometimes oppose each other on reddit. The community of Trump supporters in particular was in conflict with a number of other high-profile communities on the site, and also with the platform’s administrators.

In this research, an agent-based simulation is used to discuss the tragedy of the commons, collective intelligence, and the institutions developed by social groups. The concept of the tragedy of the commons states that environmental degradation can always be expected when many individuals freely exploit a scarce resource of common use. Hardin proposes two alternatives for dealing with it: state or privatized administration. However, another alternative, self-coordination, is possible when social groups are small. That is, the tragedy of the commons can be averted when synergy within the group—collective intelligence—develops. The focus of this paper is to analyze small groups who coordinate to achieve collective goals aimed at the preservation and perpetuation of scarce natural resources of common use. The results of this research show that the extreme exhaustion of the environment, and the consequent dissolution of the group that depends on it, is easily observed in many circumstances. However, in small groups, self-coordination through the development of strong institutions may avoid an eventual tragic outcome of total environmental degradation. Additionally, it has been found that social identity influences the behavior of individuals, determining the social structures.

This paper presents a critical reflection on insights into the ongoing endeavours for community engagement by Ayara and MAL, two urban grassroots organisations in Bogotá, Colombia, where a long history of internal conflict has resulted in diverse human rights violations. The paper presents examples of the grassroots organisations’ unique methods of engagement, which promote building collective intelligence from the bottom up through creative collaboration and design processes, leading to the rebuilding of social fabrics that support the common good for the people of Bogotá.

The mobile internet provides new and easier ways for people to organise themselves, raise issues, take action, and interact with their city. However, lack of information or motivation often prevents citizens from regularly contributing to the common good. In this paper, we present DoGood, a mobile app that aims to motivate citizens to join civic activities in their local community. Our study asks to what extent gamification can motivate users to participate in civic activities. The term civic activity is not yet well defined, so we collect activities that citizens consider civic in order to work towards a broadly accepted definition of the term. The DoGood app uses gamified elements whose role we studied in encouraging citizens to submit and promote their civic activities as well as to join the activities of others. DoGood was implemented and deployed to citizens in a five-week user study. The app succeeded in motivating most of its users to do more civic activities, and its gamified elements were well received.

Coordination is a key problem for addressing goal–action gaps in many human endeavors. We define interpersonal coordination as a type of communicative action characterized by low interpersonal belief and goal conflict. Such situations are particularly well described as having collectively “intelligent”, “common good” solutions, viz., ones that almost everyone would agree constitute social improvements. Coordination is useful across the spectrum of interpersonal communication—from isolated individuals to organizational teams. Much attention has been paid to coordination in teams and organizations. In this paper we focus on the looser interpersonal structures we call active support networks, and on technology that meets their needs. We describe two needfinding investigations focused on social support: the first examined four application areas for improving coordination in ASNs—academic coaching, vocational training, early learning intervention, and volunteer coordination—and the second examined existing technology relevant to ASNs. We find a thus-far unmet need for personal task management software that integrates smoothly with an individual’s active support network. Based on the identified needs, we then describe an open architecture for coordination that has been developed into working software. The design includes a set of capabilities we call “social prompting”, as well as templates for accomplishing multi-task goals and an engine that controls coordination in the network. The resulting tool is currently available and in continuing development. We explain its use in ASNs with an example. Follow-up studies are underway in which the technology is being applied in existing support networks.

In this paper, we present a design tool, the positioning cards, that we have developed, validated, and used in different projects. These cards are built to allow CI4CG and Participatory Design researchers to discuss the political alignment of design projects in iterative design processes that involve people in defining the technological features to be implemented. The background of the cards is the conceptualization of contemporary participatory design as public design, engaging with societally relevant phenomena outside the traditional environment of the workplace. To engage with such an extended dimension of participatory design, we frame our contribution in the contemporary form of capitalism, stressing how contemporary capitalism dispossesses the wealth created by social production. In this context, we argue, CI4CG designers need to engage deeply with the theoretical implications of their work. To support this effort, we built the cards by combining a political perspective oriented toward nourishing the common—the ensemble of the material and symbolic elements tying human beings together—with the “affect turn” in the social sciences, thereby including affective dimensions such as joy, sadness, and desire in the design of CI4CG technologies. In the final part of the article we discuss how we have used the cards in four different projects.

When it comes to improving the health of the general population, mHealth technologies with self-monitoring and intervention components hold a lot of promise. We argue, however, that due to various factors such as access, targeting, personal resources and incentives, self-monitoring applications run the risk of increasing health inequalities, thereby creating a problem of social justice. We review empirical evidence for “intervention-generated” inequalities, present arguments that self-monitoring applications are still morally acceptable, and develop approaches to avoid the promotion of health inequalities through self-monitoring applications.

A classification of the global catastrophic risks of AI is presented, along with a comprehensive list of previously identified risks. This classification allows the identification of several new risks. We show that at each level of AI’s intelligence power, separate types of possible catastrophes dominate. Our classification demonstrates that the field of AI risks is diverse and includes many scenarios beyond the commonly discussed cases of a paperclip maximizer or robot-caused unemployment. Global catastrophic failure could happen at various levels of AI development, namely: (1) before it starts self-improvement; (2) during its takeoff, when it uses various instruments to escape its initial confinement; or (3) after it successfully takes over the world and starts to implement its goal system, which could be plainly unaligned or feature flawed friendliness. AI could also halt at later stages of its development, due either to technical glitches or to ontological problems. Overall, we identified several dozen scenarios of AI-driven global catastrophe. The extent of this list illustrates that there is no one simple solution to the problem of AI safety, and that AI safety theory is complex and must be customized for each AI development level.

The present paper shows how statistical learning theory and machine learning models can be used to enhance understanding of AI-related epistemological issues regarding inductive reasoning and the reliability of generalisations. Towards this aim, the paper proceeds as follows. First, it expounds Price’s dual image of representation in terms of the notions of e-representations and i-representations that constitute subject naturalism. For Price, this is not a strictly anti-representationalist position but rather a dualist one (e- and i-representations). Second, the paper links this debate with machine learning, arguing that statistical learning theory becomes a more viable epistemological tool when it abandons the perspective of object naturalism. The paper then argues that machine learning grounds a form of knowing that can be understood in terms of e- and i-representation learning. Third, this synthesis shows a way of analysing inductive reasoning in terms of the reliability of generalisations stemming from a structure of e- and i-representations. In the age of artificial intelligence, connecting Price’s dual view of representation with deep learning provides an epistemological way forward and perhaps even an approach to how knowing is possible.