Contents

On 25 April 2018, the European Commission published a Communication announcing an ambitious European Strategy for Artificial Intelligence (AI). The major advances in AI over the last decade have revealed its capacity as a general-purpose technology and pushed inventions in areas such as mobility, healthcare, home and service robotics, education and cyber security, to name just a few. These AI-enabled developments have the capability to generate tremendous benefits not only for individuals but also for society as a whole. AI also shows promise when it comes to addressing and resolving grand challenges, such as climate change or global health and wellbeing, as expressed in the United Nations Sustainable Development Goals. In competition with other key players, such as the United States and China, Europe needs to leverage its current strengths, foster the enablers for innovation and technology uptake, and find its unique selling proposition in AI to ensure a competitive advantage and prosperous economic development in its Member States. At the same time, AI comes with risks and challenges associated with fundamental human rights and ethics. Europe must therefore craft a strategy that maximizes the benefits of AI while minimizing its risks.

The past decade has seen increasing deployment of powerful automated decision-making systems in settings ranging from smile detection on mobile phone cameras to control of safety-critical systems. While evidently powerful in solving complex tasks, these systems are typically completely opaque, i.e. they provide hardly any mechanisms to explore and understand their behaviour and the reasons underlying their decisions. This opaqueness raises numerous legal, ethical and practical concerns, which have led to initiatives and recommendations on how to address these problems, calling for greater scrutiny in the deployment of automated decision-making systems. Clearly, joint efforts are required across technical, legal, sociological and ethical domains to address these increasingly pressing issues.

by Riccardo Guidotti, Anna Monreale and Dino Pedreschi (KDDLab, ISTI-CNR Pisa and University of Pisa)

Explainable AI is an essential component of a “Human AI”, i.e., an AI that expands human experience instead of replacing it. It will be impossible to gain people’s trust in AI tools that make crucial decisions in an opaque way without explaining the rationale they follow, especially in areas where we do not want to completely delegate decisions to machines.

In recent years, expert intuition has been a hot topic within the disciplines of psychology and decision making. The results of this research can help in understanding deep learning, the driving force behind the AI renaissance that started in 2012.

The astonishing and cryptic effectiveness of Deep Neural Networks comes with a critical vulnerability to adversarial inputs: samples maliciously crafted to confuse and hinder machine learning models. Insights into the internal representations learned by deep models can help to explain their decisions and estimate their confidence, which can enable us to trace, characterise, and filter out adversarial attacks.
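As a rough illustration of this idea (a hypothetical sketch, not the authors’ method), once the internal representations of a trained network are available, inputs whose representation lies far from every class centroid can be flagged as potentially adversarial. All names and the distance threshold below are illustrative assumptions.

```python
import numpy as np

def fit_class_centroids(features, labels):
    """Compute the mean internal representation (centroid) per class
    from a matrix of hidden-layer features and their class labels."""
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def flag_suspicious(feature, centroids, threshold):
    """Flag an input whose representation lies far from every class centroid.

    Returns (is_suspicious, nearest_class): the input is suspicious when even
    the nearest centroid is further away than the chosen threshold."""
    distances = {c: np.linalg.norm(feature - mu) for c, mu in centroids.items()}
    nearest = min(distances, key=distances.get)
    return distances[nearest] > threshold, nearest
```

In practice the threshold would be calibrated on held-out clean data; this sketch only shows the shape of a representation-based filter.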

With the desire and need to be able to trust decision making systems, understanding the inner workings of complex deep learning neural network architectures may soon replace qualitative or quantitative performance as the primary focus of investigation and measure of success. We report on a study investigating a complex deep learning neural network architecture aimed at detecting causality relations between pairs of statements. It demonstrates the need to obtain a better understanding of what actually constitutes sufficient and useful insights into the behaviour of such architectures that go beyond mere transformation into rule-based representations.

We introduce a Clinical Decision Support System (CDSS) as an exercise in translational medicine. It is based on random forests, is personalisable and provides clear insight into the decision making process. A well-structured rule set is created and every rule of the decision making process can be inspected by the user (physician). Furthermore, the user has an impact on the creation of the final rule set, and the algorithm allows the comparison of different diseases as well as regional differences in the same disease.
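A minimal sketch of what such an inspectable rule set might look like (the feature names, thresholds and labels here are hypothetical, not taken from the described CDSS): each rule is an explicit list of conditions plus a prediction, so the physician can see exactly which rule fired and why.

```python
import operator

# Each rule: a list of (feature, comparison, threshold) conditions
# and the label predicted when all conditions hold.
RULES = [
    ([("crp", ">", 10.0), ("temp", ">", 38.0)], "bacterial infection likely"),
    ([("crp", "<=", 10.0)], "bacterial infection unlikely"),
]

OPS = {">": operator.gt, ">=": operator.ge, "<": operator.lt, "<=": operator.le}

def apply_rules(patient, rules):
    """Return the first matching rule's label together with the matched
    conditions, so the decision path stays visible to the user."""
    for conditions, label in rules:
        if all(OPS[op](patient[feat], thr) for feat, op, thr in conditions):
            return label, conditions
    return None, []
```

Rule sets of this form can be extracted from the paths of a trained random forest and then pruned or edited by the physician, which is what makes the process observable.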

From screening diseases to personalised precision treatments, AI is showing promise in healthcare. But how comfortable should we feel about giving black box algorithms the power to heal or kill us? In healthcare, trust is the basis of the doctor-patient relationship. A patient expects the doctor to act reliably and with precision and to explain options and decisions. The same accuracy and transparency should be expected of computational systems redefining the workflow in healthcare. Since such systems have inherent uncertainties, it is imperative to understand a) the reasoning behind such decisions and b) why mistakes occur. Anything short of this transparency will adversely affect the fabric of trust in these systems and consequently impact the doctor-patient relationship.

Artificial Intelligence (AI) applications may have different ethical and legal implications depending on the domain. One application of AI is analysis of video-interviews during the recruitment process. There are pros and cons to using AI in this context, and potential ethical and legal consequences for candidates, companies and states. There is a deficit of regulation of these systems, and a need for external and neutral auditing of the types of analysis made in interviews. We propose a multi-agent system architecture for further control and neutral auditing to guarantee a fair, inclusive and accurate AI and to reduce the potential for discrimination, for example on the basis of race or gender, in the job market.

Automated decision-making has the potential to increase both productivity and competitiveness as well as compensate for well-known human biases and cognitive flaws [1]. But today’s powerful machine-learning based technical solutions also bring about problems of their own – not least in terms of being uncomfortably black-box like. A new research project at RISE Research Institutes of Sweden, in collaboration with KTH Royal Institute of Technology, has recently been set up to study transparency in the insurance industry, a sector that is poised to undergo technological disruption.

“Cyber threat intelligence” is security-relevant information, often directly derived from cyber incidents that enables comprehensive protection against upcoming cyber-attacks. However, collecting and transforming the available low-level data into high-level threat intelligence is usually time-consuming and requires extensive manual work as well as in-depth domain knowledge. INDICÆTING supports this procedure by developing and applying machine learning algorithms that automatically detect anomalies in the monitored system behaviour, correlate affected events to generate multi-step attack models and aggregate them to generate usable threat intelligence.
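INDICÆTING’s actual algorithms are not described here; as a loose illustration of the first step of such a pipeline, a minimal anomaly detector can flag time windows whose event counts deviate strongly from typical system behaviour (a simple z-score rule, an assumption for illustration only).

```python
import statistics

def detect_anomalies(event_counts, z_threshold=3.0):
    """Return the indices of time windows whose event count deviates from
    the mean by more than z_threshold standard deviations."""
    mean = statistics.mean(event_counts)
    std = statistics.stdev(event_counts)
    if std == 0:
        return []
    return [i for i, c in enumerate(event_counts)
            if abs(c - mean) / std > z_threshold]
```

Real deployments would use far richer features and correlation across events, as the project description indicates; this only shows the anomaly-flagging idea in its simplest form.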

In complex production environments, understanding the results of a scheduling algorithm is a challenging task. To avoid tardy work orders, proALPHA and Fraunhofer ITWM developed a component for identifying practical tardiness reasons and appropriate countermeasures.

Science is in revolution. The formidable scientific and technological developments of the last century have dramatically transformed the way in which we conduct scientific research. The knowledge and applications that science produces have profound consequences for our society, both at the global level (for example, climate change) and the individual level (for example, the impact of mobile devices on our daily lives). These developments also have a profound impact on the way scientists are working today and will work in the future. In particular, informatics and mathematics have changed the way we deal with data, simulations, models and digital twins, publications, and importantly, also with ethics.

The use of machine learning in decision-making has triggered an intense debate about “fair algorithms”. Given that fairness intuitions differ and can lead to conflicting technical requirements, there is a pressing need to integrate ethical thinking into the research and design of machine learning. We outline a framework showing how this can be done.

To accelerate the adoption of reproducible research methods, researchers from CNRS and Inria have designed a MOOC targeting PhD students, research scientists and engineers working in any scientific domain.

An estimated 85% of global health research investment is wasted [1]; a total of one hundred billion US dollars in the year 2009 when it was estimated. The movement to reduce this waste recommends that previous studies be taken into account when prioritising, designing and interpreting new research. Yet current practice to summarise previous studies ignores two crucial aspects: promising initial results are more likely to develop into (large) series of studies than their disappointing counterparts, and conclusive studies are more likely to trigger meta-analyses than less noteworthy findings. Failing to account for these aspects introduces ‘accumulation bias’, a term coined by our Machine Learning research group to study all possible dependencies potentially involved in meta-analysis. Accumulation bias calls for new statistical methods to limit incorrect decisions in health research while avoiding research waste.

Training for radiological events is time consuming and risky. In contrast to real sources, a prototype augmented reality system lets trainees and trainers safely learn about the necessary detection, identification and decontamination steps.

In a pilot-study, an urban road environment detection function was considered for smart cars, as well as for self-driving cars. The implemented artificial neural network (ANN) based algorithms use the traffic sign (TS) and/or crossroad (CR) occurrences, along a route, as input. The TS-based and the CR-based classifiers were then merged into a compound one. The way this was accomplished serves as a simple, practical example of how to build upon modularity and how to retain some degree of it in functioning ANNs.
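The abstract does not spell out the merging scheme; one simple, modularity-preserving way to build a compound classifier from two trained modules (a sketch under that assumption, not the study’s actual method) is a weighted average of their class-probability outputs, so either module can be retrained or replaced independently.

```python
import numpy as np

def compound_predict(p_ts, p_cr, w_ts=0.5):
    """Merge the traffic-sign (TS) and crossroad (CR) classifiers' class
    probabilities by weighted averaging; returns (predicted_class, probs).

    Keeping the two modules separate and combining only their outputs
    retains a degree of modularity in the compound classifier."""
    p = w_ts * np.asarray(p_ts, dtype=float) + (1 - w_ts) * np.asarray(p_cr, dtype=float)
    return int(np.argmax(p)), p
```

The weight w_ts could itself be tuned on validation data to reflect how reliable each module is for a given environment class.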

BBTalk is an online service designed to support collaborative interdisciplinary development and extension of thesauri. At present, it serves to support the curation of the BackBone Thesaurus (BBT), a meta-thesaurus for the humanities. This service allows for the transparent, community development of the BackBone Thesaurus by enabling users to submit suggestions for changes and additions to the terminology, as well as link specialist thesauri to the meta-thesaurus terms, while enabling the thesauri curators to jointly edit, add and delete terminology. This model of cooperative editing is linked to an online discussion system that allows thesauri curators to confer with one another, exchange views and ideas and finally determine any necessary changes to the BBT.

Data driven prognostic systems enable us to send out an early warning of machine failure in order to reduce the cost of failures and maintenance and to improve the management of the maintenance schedule. For this purpose, robust prognostic algorithms such as deep neural networks are used, whose output is often difficult to interpret and comprehend. We investigate these models with the aim of moving towards a transparent and understandable model which can be applied to critical applications such as those in the manufacturing industry.

Effective and efficient communication and cooperation needs a semantically precise terminology, especially in disaster management, owing to the inherent urgency, time pressure, stress and often cultural differences of interventions. The European project Driver+ aims to measure the similarities between different countries’ terminologies surrounding disaster management. Each definition is characterised by a set of “descriptors” selected from a predefined ontology (the “bag-of-words”). The number of identical/different descriptors serves as a measure of the semantic similarity/difference of individual definitions and is translated into a numeric “degree of similarity”. The translation considers logical and intuitive aspects. Human judgment and mechanical derivation in the process are clearly separated and identified. By exchanging the ontology this method will also be applicable to other domains.
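The exact translation rule into a numeric degree is not given in the abstract; one natural instantiation of “shared versus differing descriptors” (an illustrative assumption, not necessarily the project’s formula) is the Jaccard index over the two descriptor sets:

```python
def degree_of_similarity(descriptors_a, descriptors_b):
    """Ratio of shared descriptors to all descriptors used by either
    definition (the Jaccard index), giving a value between 0 and 1."""
    a, b = set(descriptors_a), set(descriptors_b)
    if not (a | b):
        return 0.0
    return len(a & b) / len(a | b)
```

Because the descriptors come from a fixed ontology, two definitions from different countries become directly comparable through their descriptor sets alone.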

Deep neural networks have pushed the boundaries of artificial intelligence but their training requires vast amounts of data and high performance hardware. While truly digitised companies easily cope with these prerequisites, traditional industries still often lack the kind of data or infrastructures the current generation of end-to-end machine learning depends on. The Fraunhofer Center for Machine Learning therefore develops novel solutions which are informed by expert knowledge. These typically require less training data and are more transparent in their decision-making processes.

on the subject of Polynomial Optimization, Efficiency through Moments and Algebra at eleven European Research Institutes and Universities

The Innovative Training Network POEMA is hiring 15 Doctoral Students starting from September 2019. The proposed projects will investigate the development of new algebraic and geometric methods, combined with computer algebra techniques, for global non-linear optimisation problems. Applications will focus on smart cities challenges such as urban traffic management, water network management, energy flow control and environmental monitoring.

The Dutch youth care organization Spirit, CWI and Universidad Politécnica de Madrid (UPM) join forces in the new innovation activity of EIT Digital “G-Moji” ‒ a smartphone application to improve the lives of youth at risk. G-Moji aims to support youth with mental health issues.

ERCIM participates in DataMarketServices (DMS), a new H2020 project whose objective is to overcome the barriers faced by data-centric European SMEs and start-ups by providing free support services around data skills, entrepreneurial opportunities, legal issues and standardisation. The expected project deliverables include a portfolio of 100 data-based companies, twelve free support services, and webinars and training on topics such as GDPR and IPR.

World-renowned Web experts attended the W3C TPAC ‘18 meeting in October in Lyon, and the W3C developer relations team seized this opportunity to organize a developer meetup featuring five prominent speakers and twelve demonstrations.