Technical standards

Updates

The International Organization for Standardization (ISO) has assembled a working group to draft cybersecurity standards for consumer electronics, with the aim of ensuring that consumer privacy is embedded into the design of such products. This is also referred to as a ‘privacy by design’ standard. The ISO project committee, ISO/PC 317, will also look at the impact of artificial intelligence, data protection, and the sharing economy on the future of consumer experience. The working group will take into account the existing set of standards related to cybersecurity and consumer electronics (IT security techniques). The concept of privacy by design is already featured in the EU General Data Protection Regulation.

The US Department of Commerce and the Department of Homeland Security have released a draft report on 'Enhancing the Resilience of the Internet and Communications Ecosystem Against Botnets and Other Automated, Distributed Threats'. The report comes in response to President Trump's Executive Order on Strengthening the Cybersecurity of Federal Networks and Critical Infrastructure (from May 2017). It looks at challenges and opportunities in reducing the botnet threats that 'capitalise on the sheer number of Internet of Things devices', with goals related to a more secure technology marketplace, innovations in network infrastructure and applications, coalitions among security and technical communities at the national and international levels, and awareness and education. It outlines several recommendations, such as: establishing broadly accepted baseline security profiles for IoT devices in home and industrial applications, and promoting international adoption through bilateral arrangements and the use of international standards, including IPv6 implementation; more efforts from the industry to develop innovative solutions for preventing and mitigating distributed threats; collaboration between government and industry to ensure existing best practices, frameworks, and guidelines relevant to IoT are more widely adopted; and promoting the international adoption of best practices and relevant tools through bilateral and multilateral international engagement efforts. The report is open for public comment until mid-February 2018.

The 3rd Generation Partnership Project (3GPP), a telecommunications industry standards body, has approved the first standard for non-standalone 5G New Radio, six months earlier than expected. The specification will use existing 4G infrastructure and will form the basis for the deployment of commercial 5G products. Some carriers have already announced 5G implementation plans. However, the 5G standard has not yet been fully developed: this release is part of the first phase of 3GPP's two-phase 5G standardisation process.

The Internet Engineering Task Force (IETF) has published RFC 8200, making the Internet Protocol version 6 (IPv6) a full Internet Standard. As explained on the Internet Society’s website, although IPv6 was already defined in RFC 2460 (updated by several other RFCs), that specification had the status of a draft standard. What the IETF did with RFC 8200 was to combine the many RFCs defining the IPv6 specification into a single document, together with the errata. Technically speaking, there are therefore no changes to the IPv6 specification itself, but IPv6 is now a full Internet Standard, defined in a single RFC.

South Korea plans to submit national 5G standards to the ITU in February 2018. South Korea is the first country to prepare 5G standards from a national perspective, according to etnews. It is expected that the proposal will include standards being developed by private companies and private standardisation organisations. South Korea wishes to become the first country with a commercial 5G deployment, in 2019. This plan is backed by the country’s long-term strategic aim to influence the global 5G standardisation process in favour of its technology, standards, and norms.

The ITU has drafted a report on the minimum requirements for 5G networks. The proposed framing standard is to be approved by the ITU-R Study Group 5 in November 2017, and should serve as a basis for further standardisation of the International Mobile Telecommunications 2020 (IMT-2020) environment and its 5G networks. The document foresees three main usage scenarios for IMT-2020 networks: enhanced mobile broadband (eMBB), ultra-reliable and low-latency communications (URLLC), and massive machine-type communications (mMTC). To meet the 5G criteria, a network has to offer peak speeds of 20 Gbit/s (download) and 10 Gbit/s (upload) under laboratory conditions. Real-world conditions in dense urban areas will yield much lower speeds: the common user experience is expected to be 100 Mbit/s (download) and 50 Mbit/s (upload).

What are technical standards?

The Internet technical standards and services form the infrastructure that makes the Internet work, and include the Transmission Control Protocol/Internet Protocol (TCP/IP), the domain name system (DNS), and the secure sockets layer (SSL). Standards ensure that hardware and software developed or manufactured by different entities can work together as seamlessly as possible. Standards therefore guide the technical community, including manufacturers, to develop interoperable hardware and software.

TCP/IP is the main Internet technical standard. It is based on three principles: packet-switching, end-to-end networking, and robustness. Internet governance related to TCP/IP has two important aspects: the introduction of new standards - an aspect that is shared by technical standards in general - and the distribution of IP numbers, which is explained in more detail in the section on IP numbers.
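As a simple illustration of how applications rely on these standards, the following Python sketch first uses the DNS to translate a host name into an IP address, and then opens a TCP connection over which a minimal HTTP request is sent. The host name and request shown are purely illustrative.

```python
# A minimal sketch of how an application builds on the DNS and TCP/IP standards.
# The host name and port are illustrative; any reachable web server would do.
import socket

HOST = "example.com"   # hypothetical target host
PORT = 80              # standard HTTP port

# DNS: translate the human-readable name into an IP address (IPv4 or IPv6)
addr_info = socket.getaddrinfo(HOST, PORT, proto=socket.IPPROTO_TCP)
family, sock_type, proto, _, sock_addr = addr_info[0]
print("Resolved address:", sock_addr)

# TCP/IP: open a reliable connection over the packet-switched network
with socket.socket(family, sock_type, proto) as sock:
    sock.connect(sock_addr)
    # Send a minimal HTTP request over the established TCP connection
    sock.sendall(b"HEAD / HTTP/1.1\r\nHost: " + HOST.encode() + b"\r\nConnection: close\r\n\r\n")
    print(sock.recv(200).decode(errors="replace"))
```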

Setting technical standards

Technical standards are increasingly being set by private and professional institutions. The Internet Architecture Board (IAB) oversees the technical and engineering development of the Internet, while most standards are set by the Internet Engineering Task Force (IETF) and published as Requests for Comments (RFCs). Both the IAB and the IETF have their institutional home within the Internet Society (ISOC).

Other institutions include: the Institute of Electrical and Electronics Engineers (IEEE), which develops standards such as the WiFi standard (IEEE 802.11b); the WiFi Alliance, which is the certification body for WiFi-compatible equipment; and the GSM Association (GSMA), which develops standards for mobile networks.

Standards that are open (open Internet standards) allow developers to set up new services without requiring permission. Examples include the World Wide Web and a range of Internet protocols. The open approach to standards development has been affirmed by a number of institutions. One such affirmation is the Open Stand initiative, endorsed by bodies including IEEE, IETF, IAB, the World Wide Web Consortium (W3C), and the Internet Society.

Technology, standards, and policy

The relevance of setting or implementing standards in such a fast-developing market gives standard-setting bodies a considerable amount of influence.

Technical standards can have far-reaching economic and social consequences, promoting specific interests and altering the balance of power between competing businesses and/or national interests. Standards are essential for the Internet. Through standards and software design, Internet developers can shape how human rights are exercised and protected (e.g. freedom of information, privacy, and data protection).

Efforts to create formal standards bring private technical decisions made by system builders into the public realm; in this way, standards battles can bring to light unspoken assumptions and conflicts of interest. The very passion with which stakeholders contest standards decisions should alert us to the deeper meaning beneath the nuts and bolts.

Possible gaps in dealing with technical standards

Non-technical aspects - such as security, human rights, and competition policy - may not be sufficiently covered during the process of developing technical standards. For instance, most of the past developments of Internet standards aimed at improving performance or introducing new applications, whereas security was not a priority. It is now unclear whether the IETF will be able to change standards to provide proper authentication and, ultimately, reduce the misuse of the Internet (e.g. spam, cybercrime).

Given the controversy surrounding any changes to basic Internet standards, it is likely that security-related improvements in the basic Internet protocol will be gradual and slow. Yet decisive steps are starting to be taken in this direction, with the Domain Name System Security Extensions (DNSSEC) being a good illustrative example. Following almost 12 years of research, trials, and debates within the technical community, DNSSEC started to be deployed for some ccTLDs, and from 2010 it was also implemented at the root server level. However, further challenges reside in the large-scale adoption of this security standard further down the ladder, by domain name registrars, ISPs, and website owners.
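For illustration, a DNSSEC-aware query can be made with a few lines of code. The sketch below uses the third-party dnspython library to ask a resolver for a zone's DNSKEY records together with their RRSIG signatures; the domain and resolver address are illustrative placeholders, and full validation of the signature chain is left to a validating resolver.

```python
# A minimal sketch of checking whether a zone publishes DNSSEC records,
# using the third-party dnspython library (pip install dnspython).
# The domain and resolver address below are illustrative.
import dns.message
import dns.query
import dns.rdatatype

DOMAIN = "example.com"      # hypothetical signed zone
RESOLVER = "8.8.8.8"        # any DNSSEC-aware resolver

# Ask for the zone's DNSKEY records and request DNSSEC signatures (RRSIG)
query = dns.message.make_query(DOMAIN, dns.rdatatype.DNSKEY, want_dnssec=True)
response = dns.query.tcp(query, RESOLVER, timeout=5)  # TCP avoids truncated answers

for rrset in response.answer:
    print(rrset.rdtype, len(rrset), "record(s)")
# A signed zone returns both DNSKEY and RRSIG record sets; validating the
# signature chain up to the root is the job of a validating resolver.
```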

As with web standards, there appears to be a gap in the participation of stakeholders in the development of technical standards. Even though participation is open to all stakeholder groups, some submissions to the WGEC/correspondence group have noted the need for more involvement from specific stakeholder groups, such as governments.

Instruments

Conventions

Telecommunications have been the subject of international regulation for a long time. In 1885, the International Telegraph Union (which later became the International Telecommunication Union) initiated work on elaborating international legislation for telephony; provisions on the unit of charge and the length of a call were initially set in the Telegraph Regulations. In 1932, the first Telephone Regulations were adopted; they were meant to apply only to international telephone services in the ‘European system’ - all countries in Europe, and countries outside Europe which declared that they belonged to this system. In 1988, the Telegraph Regulations and the Telephone Regulations were merged into a single treaty - the International Telecommunication Regulations (ITRs), whose aim was to ‘facilitate global interconnection and interoperability of telecommunication facilities and to promote the harmonious development and efficient operation of technical facilities, as well as the efficiency, usefulness and availability to the public of international telecommunication services’.

Unchanged for more than two decades, the ITRs were revised in 2012, in the framework of the World Conference on International Telecommunications (WCIT). Heated discussions were held during this conference on whether the Regulations should continue to apply only to traditional telecommunications, or be extended to also cover the Internet; several proposals were made that could have had a significant impact on how the Internet functions, on its underlying principles, as well as on Internet security and Internet content related issues. However, the final text of the revised Regulations did not contain specific provisions dealing with the Internet, as member states could not reach consensus in this regard.

While maintaining the initial aim of facilitating interconnection and interoperability of telecommunications, the 2012 ITRs deal with issues such as: international telecommunication networks and services, and the responsibilities of member states in their development, functioning, and evolution; the priority of safety-of-life telecommunications; the obligation of member states to endeavour to ensure the security and robustness of international telecommunication networks, as well as to take the necessary measures to prevent the propagation of unsolicited bulk electronic communications; provisions on charging and accounting for international telecommunication services; member states' obligations in the case of suspension of international telecommunication services; and the obligation of member states to promote access for persons with disabilities to international telecommunication services.

Although the text of the revised ITRs does not contain specific provisions regarding the Internet, some of the final provisions were seen by certain member states as potentially covering Internet issues. For example, the new provision according to which the Regulations would be applicable to ‘those operating agencies, authorized or recognized by a Member State, to establish, operate and engage in international telecommunications services to the public’ was seen as broadening the scope of the ITRs to include Internet service providers. Controversy also surrounded the introduction of provisions concerning network security and unsolicited electronic communications; as these are associated with the broader concept of cybersecurity (including in relation to spam in the context of emails), it was argued that it is difficult to interpret these provisions as being applicable only to traditional telecommunications (and not to the Internet). In addition to the differing interpretations of such provisions, the definition of telecommunication itself (unchanged compared to the 1988 ITRs) is seen by some as also covering communications made via the Internet: ‘any transmission, emission or reception of signs, signals, writing, images and sounds or intelligence of any nature by wire, radio, optical or other electromagnetic systems’.

These and other disputes led to the final ITRs being signed by only 89 out of the 144 delegations with voting rights at WCIT; among the countries that did not sign the Regulations were the United States, members of the European Union, Norway, Switzerland, Australia, and Canada. ITU member states that have not signed the revised ITRs continue to be bound by the 1988 version of the Regulations.

Implementation of the ITRs lies solely with the signatory member states, and is done through national legislation or regulations.

In 1988, the Telegraph Regulations and the Telephone Regulations were merged into a single treaty - the International Telecommunication Regulations (ITRs). Signed by 122 countries, the Regulations were adopted with the aim to ‘facilitate global interconnection and interoperability of telecommunication facilities and to promote the harmonious development and efficient operation of technical facilities, as well as the efficiency, usefulness and availability to the public of international telecommunication services’. The Regulations introduced a definition of telecommunications, described as ‘any transmission, emission or reception of signs, signals, writing, images and sounds or intelligence of any nature by wire, radio, optical or other electromagnetic systems’. The ITRs then outline a series of principles regarding the operation and maintenance of the ‘international telecommunication network’ and the provision of international telecommunication services, as well as the related responsibilities of member states and network operators (mostly state-owned at that time). A common set of principles for establishing and collecting charges for international telecommunication services, as well as for accounting rates and monetary units for such charges, is also included in the Regulations. Moreover, the right of ITU member states to suspend international telecommunication services (initially set in the ITU Convention) is reiterated here, and an obligation is introduced for member states that exercise this right to notify the ITU Secretary-General.

The 1988 Regulations provided the general ‘regulatory’ framework for the provision and operation of international telecommunication networks and services. Adopted at a time when the Internet was very much in its early stages of development, the ITRs did not deal with Internet-related issues. But they gained the attention of the Internet governance community in 2012, in the context of the ITU-led process aimed at revising the Regulations. Following a number of revision proposals made by both member states and other actors, the intergovernmental negotiation process on the review of the ITRs was held in the context of the 2012 World Conference on International Telecommunications (WCIT). Some of the discussed proposals were aimed at extending the applicability of the Regulations to also cover communications transmitted via the Internet. Such proposals were not well received by several ITU member states, and no consensus on such an extension could be reached by the end of the conference. Nevertheless, a revised version of the Regulations was adopted at that time, and it was signed by 89 out of the 144 delegations with voting rights. The countries that refused to sign the revised ITRs argued that, although the document itself did not make specific references to the Internet, some of the adopted revisions (such as those related to security and spam) could be interpreted as covering Internet-related issues as well.

Currently, both the 1988 and the 2012 versions of the ITRs are in force. The 1988 Regulations remain applicable to countries that have refused to become party to the 2012 revised treaty. The 2012 revised Regulations became applicable to signatory states in January 2015. Relations between a non-party to the 2012 treaty and a party to the 2012 treaty are governed by the 1988 treaty.

Standards

With the rapid adoption of cloud computing services, which allow data, applications, and services to be stored in and accessed from cloud servers, there are increasing concerns over the security of such services. The distributed nature of cloud computing, the vast amount of data stored in the cloud, and the possibility of remotely accessing resources stored in the cloud make cloud computing more vulnerable to security threats and challenges than other storage modalities.

Providers of cloud computing services are continuously looking into solutions for enhancing the security of their services, and, therefore, increasing the confidence of their users. At the same time, technical organisations and standardisation bodies are exploring possibilities for developing standards and recommendations specifically addressing the issue of cloud security.

The ITU-T Recommendation X.1601 on a ‘Security framework for cloud computing’, adopted in October 2015, gives an overview of security threats and challenges related to cloud computing, and outlines a number of security capabilities that could be deployed against such threats and challenges. Some of the security threats and challenges described in the recommendation include: data loss and leakage, insecure service access, insider threats, unauthorised administration access, loss of trust, loss of confidentiality, service unavailability, loss of software integrity, and jurisdictional conflict. According to the recommendation, such threats and challenges could be tackled through the implementation of security capabilities such as: trust models for identity and access management systems that contribute to the confidentiality, integrity, and availability of services and resources; interface security, ensured through mechanisms such as unilateral/mutual authentication, end-to-end encryption, and digital signatures; network security; data isolation and protection; incident management and disaster recovery; and interoperability, portability, and reversibility.
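As an illustration of the ‘interface security’ capability (mutual authentication combined with encryption), the following Python sketch sets up a mutually authenticated TLS connection to a hypothetical cloud endpoint. The host name and certificate file names are assumptions made for the example, not part of the recommendation.

```python
# A minimal sketch of interface security via mutual TLS authentication.
# Host and file names are hypothetical; a real deployment would use the
# cloud provider's endpoints, CA bundle, and issued client credentials.
import socket
import ssl

CLOUD_HOST = "api.cloud.example"          # hypothetical cloud service endpoint

# Verify the server against a trusted CA bundle (server authentication)
context = ssl.create_default_context(cafile="provider-ca.pem")
# Present a client certificate so the server can authenticate us too (mutual auth)
context.load_cert_chain(certfile="client.crt", keyfile="client.key")

with socket.create_connection((CLOUD_HOST, 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=CLOUD_HOST) as tls_sock:
        # Application data now travels over an encrypted, mutually
        # authenticated channel
        print("Negotiated:", tls_sock.version(), tls_sock.cipher())
```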

As cloud computing technologies and services continue to evolve and to be increasingly used as an alternative to the local storage of data and applications, security challenges will also continue to grow. In this context, there is an intensification of efforts aimed at developing and implementing standards, recommendations, and solutions addressing the increasing security risks and challenges. As an example, the ITU-T Study Group 17 continues its work in areas such as: requirements for software-as-a-service application environments, operational security for cloud computing, and cloud service customer data security.

Big data refers to large masses of data which require non-traditional data processing applications. Due to their size and complexity, such data sets are challenging to analyse, store, transfer, and visualise; efficient analysis within required timeframes is also a major challenge. The need to adopt an international standard on big data has been linked to the global adoption of big data solutions.

In November 2015, ITU members approved the first ITU standard on Big Data. Recommendation Y.3600 on 'Big data – Cloud computing based requirements and capabilities' describes the meaning of Big Data and the characteristics of the Big Data ecosystem from a standardization perspective, and provides requirements, capabilities and use cases of cloud computing-based big data for large data sets which cannot be rapidly transferred and analysed using traditional technologies. The standard in fact outlines how cloud computing systems can be leveraged to provide Big Data services, thereby assisting the industry in managing large data sets.

Big data standardisation activities within the ITU fall under the responsibility of Study Group 13 – responsible for future networks, cloud computing, and network aspects of mobile communications – within the ITU’s Telecommunication Standardization Sector.

Until the adoption of the new international standard, global standards were seen as missing a key ingredient; the absence of such a standard was viewed as a challenge for the global adoption of big data solutions in a wider range of scenarios. Although analysts believe that market forces push vendors to establish interoperability on their own before official standards are agreed upon, the newly adopted standard aims 'to build cohesion in the terminology used to describe cloud-based big data, and to offer a common basis for the development of big data services and supporting technical standards.'

Big data can make important contributions to development, and can help with relief efforts in cases of natural disasters or disease outbreaks. Researchers believe that big data analysis helps improve decision-making in areas such as health care, crime, security, and economic productivity.

The concept of 'Internet of Things' (IoT) generally refers to a network of interconnected physical and virtual devices or objects that use the Internet to exchange data with manufacturers, operators, and among themselves. IoT applications can be found in areas such as transportation, energy, home appliances, medical and healthcare devices, environment, retail, and agriculture.

A more formal definition of the Internet of Things has been elaborated in the framework of the Telecommunication Standardization Sector (ITU-T) of the International Telecommunication Union, and published in the Recommendation ITU-T Y.2060 ‘Overview of the Internet of things’ (adopted in June 2012). According to this recommendation, the Internet of Things represents ‘a global infrastructure for the information society, enabling advanced services by interconnecting (physical and virtual) things based on existing and evolving interoperable information and communication technologies’. The recommendation also provides a technical overview of the Internet of Things, and outlines the fundamental characteristics (such as interconnectivity, heterogeneity, and dynamic changes) and high-level requirements (such as identification-based connectivity, interoperability, autonomic networking, privacy, and data protection) of the IoT. In addition, the IoT reference model is explained, and details are provided on its components: the four layers (application, service and application support, network, and device), as well as management and security capabilities.

The Recommendation ITU-T Y.2060 is a result of the work carried out by the ITU-T Study Group 13, which is responsible for developing standards and recommendations covering future networks, including cloud computing, mobile, and next-generation networks. While the scope of this group continues to include IoT-related issues (such as support of IoT in next-generation networks), more specific IoT standardisation work is now carried out within the ITU-T Study Group 20 (created in 2015), whose initial focus is on IoT applications in smart cities and communities.

The increasing availability of IoT devices and applications, and their expansion into new areas are expected to bring significant advantages, but also challenges. Mitigating the security threats and weaknesses of IoT services, and ensuring the protection of privacy and personal data in the context of IoT data transmission are only two examples of such challenges, which are and will continue to be looked into carefully especially by the private sector and the technical community.

The interactive format of the session was explained, and the audience was split into three groups representing the perspectives of the manufacturer, the user, and the policymaker, to discuss different aspects of the Internet of Things (IoT): privacy, security, and economics.

The group on policymaking, facilitated by Tropina, built its discussions around the experience of the UK government in supporting the research and development of the IoT, and engaging with businesses and citizens to advance UK leadership in IoT applicability. The goal of this initiative is to propose commercial incentives for manufacturers to ensure the development of the IoT for healthcare services, transportation, and smart cities. The group agreed that privacy and security by design should be a priority for IoT devices and software. However, policymakers should work with the industry to set standards at the global level, to ensure a cross-border flow of IoT technologies and devices and, most importantly, to prevent counterfeiting, which would endanger security and privacy tremendously. For this reason, it would be good to involve international standardisation organisations. Finally, the group agreed on the necessity of finding reliable metrics for checking the progress of IoT deployment and how it really contributes to economic growth.

The second group’s discussion was led by Koch and focused on the manufacturer’s perspective, with most of the discussion being about security. However, as businesses, manufacturers’ foremost priority is to sell products, and it was roughly agreed that economics was the driving factor behind having security or privacy on the agenda for IoT manufacturing. Following the roll-out of the General Data Protection Regulation (GDPR), privacy and security became an economic consideration as well. Since businesses mainly run on consumer/user demand, the group also argued that demanding security was, at the end of the day, the consumer’s responsibility. The layers of security, from the design and manufacturing of microchips to software, were discussed, and companies that take on all layers of production were mentioned as examples of efforts to increase product security. Another point made was that the IoT is not only there for end users and is not always connected to the Internet: a big part of the industry is built upon business-to-business applications for logistics, manufacturing, transportation, environmental monitoring, and other industries.

The third group focused on the user’s perspective and started the discussion by trying to formulate the questions that they saw as relevant to making informed decisions about connected products and devices, whether security was a concern, and how a consumer can learn about quality and security when it comes to devices whose technical functioning is not necessarily intuitive. Some of the other points raised by the discussion group include:

One of the key topics was whether users were ready to pay more for secured IoT devices, and participants agreed that price was a relevant component but not the only issue to be considered.

Information regarding the safety and security of connected devices needs to be clear, objective, and intelligible for non-experts; an excessive burden on vulnerable users, who normally lack the necessary expertise, will not improve the overall cybersecurity environment.

Whether through formal certification or informal mechanisms, users want devices to be tested and the results publicised, so as to ensure diversity and confrontation of views, as well as diversity of sources that are independent and, if possible, officially verifiable.

Children’s toys and devices may be a good starting point to raise awareness regarding the importance of privacy and security of connected devices, since people tend to raise their concerns and awareness efforts when these interests are at stake.

The session continued with discussions comparing the messages and perspectives of policymakers, users, and manufacturers. The question of the responsibility for security was debated in depth. Users put economics over security, which determines sector trends in the IoT, so education and awareness should be a priority. One solution could be for governments to impose security by design on manufacturers, which would address the security issue. Another important point to consider is imported products: is it a solution to tightly regulate imported IoT devices and require certifications? Participants from a technical background stressed that security is not a state but constantly evolves, which raises the question of who is responsible for security issues. In 10 or 20 years, if a manufacturer is long gone but the products are still in use, who will governments and users turn to? Industry-set standards can be a solution to these issues, much like the CE standards for various products.

Final remarks included that the current disclosures and disclaimers that come with connected devices are not sufficient. Additional regulation, beyond existing privacy regulation, will likely be needed for the IoT. And in the near future, if there is a lack of consideration for privacy and security in the IoT, such devices may simply not be allowed on the European market.

The session was moderated by Mr Walid A Saqaf, Internet Society, and was part of EuroDIG’s educational track. At the beginning, Mr Ken Hansen, Blockchain Roadshow, offered a simplified explanation of blockchain technology. He explained that a blockchain is a distributed open ledger network with no single centralised point, operating on a P2P (peer-to-peer) distributed application architecture. Hansen clarified that bitcoin is not synonymous with blockchain; it is just one of the applications that run on a blockchain, alongside others such as ‘Ripple’ (a financial transfers solution) or ‘FlightDelay’ (instant payout in case of a delayed flight).

After Hansen’s presentation, the interactive part of the session started. It was organised as a game in which the audience learned how blocks are created inside a blockchain. ‘Miners’ (volunteers from the audience) acted as block creators and were rewarded with Yummi coins (chocolate bars). Miners serve as protectors of this distributed ledger, monitoring the transactions which take place in a blockchain. The audience learned how a blockchain ensures the immutability of transactions, with each new block referencing everything that has previously happened in the network.
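The tamper-evident property demonstrated in the game can be illustrated with a short, self-contained Python sketch: each block stores the hash of the previous one, so altering any earlier transaction invalidates every later link. The transactions and the toy chain below are, of course, purely illustrative, and real blockchains add mining, consensus, and networking on top of this idea.

```python
# A minimal sketch of why a blockchain is tamper-evident: each block stores
# the hash of the previous block, so changing an earlier entry breaks the chain.
import hashlib
import json

def block_hash(block):
    # Hash a block's contents in a deterministic way
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

# Build a tiny chain of three blocks
chain = []
prev = "0" * 64                                   # placeholder hash for the first block
for tx in ["Alice pays Bob 5", "Bob pays Carol 2", "Carol pays Dave 1"]:
    block = {"transaction": tx, "previous_hash": prev}
    prev = block_hash(block)
    chain.append(block)

def chain_is_valid(chain):
    prev = "0" * 64
    for block in chain:
        if block["previous_hash"] != prev:
            return False
        prev = block_hash(block)
    return True

print(chain_is_valid(chain))                      # True: untouched chain
chain[0]["transaction"] = "Alice pays Bob 500"    # tamper with an early block
print(chain_is_valid(chain))                      # False: later links no longer match
```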

Continuing the discussion, Mr Anton Zurenko, Stratum, addressed the question of trust on the Internet, and the question was raised: is blockchain a technology that can help? He pointed out that blockchain is a system that requires no trust, since everything is regulated by algorithms and mathematics. It is a system that makes sure that there is a shared protocol, but that each participant still retains independence.

Ms Hannahe Boujemi, blockchain researcher, added that there is no clear picture on what should be regulated. This technology is not yet mainstream, and it would be good to wait a while longer before regulating. The EU is taking this specific approach regarding regulation. Generally speaking, regulators do not have a lot of options at the moment, and they need to leave it to the market to bring more clarity.

Mr Arvin Kamberi, DiploFoundation, pointed out that blockchain was developed as an answer to the loss of trust after the 2008 financial crisis. It was created as a response to the need for a decentralised trust system. He added that it is important to note that, aside from open blockchains (such as the bitcoin blockchain), other blockchain applications can include permissions, adding additional layers of security and scalability. These blockchains would be managed in a centralised way, but would significantly help in reducing cost and increasing efficiency. On the other hand, open blockchains offer a new way of system governance by emergent consensus. Since the story of developing blockchains resembles the early days of the Internet and the discussions of standardisation, Kamberi added that we might need a similar solution (for example, the multistakeholder model within the Internet Corporation for Assigned Names and Numbers – ICANN).

Mr Michael Oghia, Internet governance consultant, added that one important aspect of governance is the sustainability of blockchains. When it comes to blockchains, many applications will be developed, and this might lead to significant levels of energy consumption. He added that as we move forward, this issue should be incorporated as much as possible into the Internet governance discussions.

Questions from the audience addressed the issue of regulation. Regulation needs to be done in the context of specific sectors: if the technology has an impact on the financial sector, it should be regulated there, rather than being regulated as a technology in itself. It was concluded that the Internet, as we know it today, emerged as a platform that provides services, and one can anticipate that blockchains will support a similar range of applications built on top of them. Some of the most prominent players are involved in harnessing this new technology for their products, and that has been recognised as the main challenge for future regulation.

Publications

The latest edition of the glossary, compiled by DiploFoundation, contains explanations of over 130 acronyms, initialisms, and abbreviations used in IG parlance. In addition to the complete term, most entries include a concise explanation and a link for further information.

The book, now in its sixth edition, provides a comprehensive overview of the main issues and actors in the field of Internet governance and digital policy through a practical framework for analysis, discussion, and resolution of significant issues. It has been translated into many languages.

GIP event reports

The panel discussion was co-organised by the ITU and the UN Institute for Disarmament Research (UNIDIR) and moderated by Mr Thomas Wiegand (Professor, TU Berlin, Executive Director, Fraunhofer Heinrich Hertz Institute).

In his opening remarks, Wiegand mentioned two challenging aspects of the development of artificial intelligence (AI): the engineering perspective and the ethics of it. AI should reflect what society expects from it, but it must also come equipped with important safety measures.

Mr Robert Kirkpatrick (Director, UN Global Pulse) opened the discussion by introducing five tools regarding refugees. These tools range from recognition software able to identify xenophobic content about refugees on social media, to early warning systems of vessels in the Mediterranean, to satellite imagery support.

The UN Global Pulse has been working on guidelines for the use of AI, which have been adopted by a variety of UN agencies. For Kirkpatrick, the widely accepted principle of ‘do no harm’ has two aspects that need to be taken into account for the development of AI tools. The first implication foresees that no direct harm should come from the use of a particular technology. But more importantly, this principle indicates that every reasonable step to prevent harm from happening must be undertaken. So far, privacy regulations fall short of establishing a satisfying level of protection. Indeed, nuclear technology regulation could serve as an example of how to regulate the use of AI.

Mr Rob McCargow (Programme Leader Artificial Intelligence and Technology, PwC UK) foresaw that the greatest impact of AI on society will come when the private sector widely adopts AI technology. Its application will then range from the medical sector to the financial sector and truly change society.

He cited some figures from the PwC’s CEO survey which is conducted at Davos every year, showing that:

72% of CEOs believe that AI will be a business advantage in the future; and

67% of CEOs believe that AI will have a negative impact on stakeholder trust.

Thus, according to the speaker, the use of AI for good would be severely damaged if the disruptive aspects of AI are not addressed early on and are allowed to gain traction. He further noted that AI will fail if it is solely viewed as a standalone technology development. AI alongside other technologies will have severe workforce implications, and therefore companies need to prepare for it in a multidisciplinary and multistakeholder fashion.

Once it can be guaranteed that AI is safe, it will unlock its potential for good. McCargow said that so far there is not enough appropriate governance in businesses asking the right questions about the use and implementation of new technologies.

Mr Wojciech Samek (Head of Machine Learning Group, Fraunhofer Heinrich Hertz Institute) noted that one of the challenges of embracing AI for good stems from the fact that we do not understand how and why AI arrives at certain conclusions. To some extent, it can be viewed as a black box, where we fail to understand why certain methods work or fail. In order to build trust, it is therefore important to know and understand how these processes work, and to provide researchers with tools to interpret AI-generated results.

The interpretability of outcomes is also very important in terms of legal accountability and scientific progress. Results obtained through the use of AI need to be explainable and reproducible in order to unfold their full potential.

Tentative steps in that direction have been undertaken by Samek and his team, who developed an application that visualises how AI image recognition operates. The AI algorithms were fed images of animals to be recognised and classified automatically by the software. Through the software, the researchers were able to identify the areas of the image that the algorithm had analysed to recognise the animal. They discovered that the software did not analyse the shape and features of the depicted animal, but rather scanned the small copyright signs at the bottom of the image, revealing how the AI had actually arrived at its classification. Samek pointed to the importance of being able to verify the predictions made by AI and to know how it comes to its conclusions.

During his speech, Mr Toufi Saliba (AI Decentralized) indicated that the way in which we judge the data will always be subjective. Our expectations of the outcome will always be biased in a certain way, and we therefore have to look at feeding the learning patterns more precise data to teach the software how to come to our expected conclusions.

The criteria for AI’s operability should thus not be solely result-oriented but instead, should be focused on the input we provide it with.

Saliba further questioned our understanding of AI by asking what the audience would consider to be AI before stating that Bitcoin could be considered a form of AI because of its modus operandi: a machine that is incentivised to compete for resources and is not owned or directly controlled by humans.

According to Saliba, the question of regulating AI is central because it will define whether AI can liberate humanity or become one of its greatest challenges. Ethical considerations therefore need to be built in from its inception.

Mr Andy Chen (VP of Professional and Educational Activities, Institute of Electrical and Electronics Engineers (IEEE) Computer Society) spoke about the necessity of incentivising young professionals to build ethics into their AI developments.

He then introduced the Mind AI project, a linear qualitative research process whose results can be easily traced back by the researchers, and which works on the basis of natural languages. Through this open-source project, accessible to everyone, AI will help to democratise progress.

He informed the audience about some ethics projects surrounding AI from Stanford University in the US and the IEEE’s global initiative on ethical design for autonomous and intelligent systems. The IEEE’s initiative has launched a call for papers for its second edition.

Ms Susan Oh (Chair of AI, Blockchain for Impact UN GA, Founder & CEO, MKR AI) briefly introduced MKR AI which she developed as a fact-checking system that tracks patterns of deception. The platform operates through input from users who validate or invalidate information that has been analysed on the website. If certain facts or methodologies are proven to be less accurate than those of the platform, users are rewarded with tokens.

The speaker noted that machine learning and AI will heavily rely on blockchain as they progress. On the other hand, blockchain also needs AI in order to validate or signal anomalies of the ledger.

Furthermore, if people have sovereignty over their data, they can volunteer to share their data and be rewarded for it in the form of tokens that could be used for their personal benefit. This way, AI evolution would be easier to regulate than through existing methods such as hard laws because regulations tend to be unable to determine what to track and are difficult to enforce.

According to Oh, tokenising societal processes benefits the development of AI because it helps AI better understand human interactions all the while benefiting all the parties involved. AI systems in combination with cryptocurrencies and other types of blockchains will provide a more transparent way of operating within society and incentivise collaboration among users.

This session explored the need for a common framework for data and artificial intelligence (AI), allowing stakeholders to work together to make AI for good a reality. The moderator, Mr Amir Banifatemi (AI Lead at XPRIZE Foundation), reminded the participants about the twofold aims of the summit: identifying practical applications of AI to accelerate progress towards the sustainable development goals (SDGs), as well as formulating strategies to ensure the trusted, safe, and inclusive development and dissemination of AI technology, and equitable access to its benefits.

Connecting remotely, Mr Wendell Wallach (Consultant, Ethicist, and Scholar at Yale University’s Interdisciplinary Center for Bioethics) recommended focusing on agile and comprehensive governance for AI to ensure that its adoption benefits humanity and minimises its potential harms. Comprehensive governance, ranging from technological solutions and standards to corporate oversight and soft law, can provide an agile way of managing this challenge. In this context, Wallach presented the Building Global Infrastructure for the comprehensive governance of AI (BGI for AI) initiative to resolve not only the technological, but also the political and practical challenges raised by AI through agile, comprehensive governance.

After providing an overview of UN initiatives which touch upon AI-related issues (the High-Level Committee on Programmes, the Internet Governance Forum, and UN-DESA’s Forum on Science, Technology and Innovation), Mr Vincenzo Aquaro (Chief of Digital Governance, Public Institutions and Digital Government Division, United Nations) explained that this global summit is already one of the most important international forums on AI, due to its multistakeholder and multidisciplinary nature, and especially its aim to develop – rather than report about – concrete initiatives. Aquaro reminded the participants about the SDGs’ mission to leave no one behind, which should be applied to the work of the AI community as well, to be able to support the creation and promotion of AI solutions for the common good. AI should be a ‘universal resource for all of humanity, to be equally distributed, available to everyone, no matter the level of development and capacity’. Yet, he noted that one of the biggest challenges is to create a common framework to regulate the proper use of AI without stifling innovation, and addressing this challenge requires the involvement of all stakeholders.

Banifatemi then presented a common platform for AI for good, which would facilitate the collaboration between AI practitioners and ‘problem owners’ (governments, civil society, domain experts, etc.) and provide solutions in a systematic manner, moving beyond pilots and individual projects. Mr Stuart Russell (Professor of Electrical Engineering and Computer Sciences, UC-Berkeley) added that this collaboration between problem owners and engineers, and the convergence of pilot projects into global services, was the main stumbling block identified by the AI + Satellites track. Projects often result in publications that are filed away, while real problems on the ground continue to persist. As this is a common challenge among almost all AI projects, we need to develop standardised ways of collaboration and ‘shepherds’ with experience to avoid the roadblocks that AI researchers are not equipped to anticipate. After all, AI for good is not just a technical issue, but also has governance and sociological dimensions requiring different kinds of expertise.

Mr Trent McConaghy (Founder, Ocean Protocol; Founder & CTO, BigchainDB) presented a framework for AI Commons, which is a scalable, decentralised platform that brings together problem owners, AI practitioners, and suppliers of data and infrastructure. The platform contains a variety of data sources, provides incentives to share data, includes privacy provisions, and has in-built mechanisms for data governance (e.g. permissions, labels, and ontologies) and interoperability. McConaghy concluded that the SDGs are a great way to summarise global problems, and a high-level way to approach them with AI would benefit from a common platform, which is not just something that can hypothetically be built, but is already in the process of being constructed.

Ms Francesca Rossi (Research Scientist at IBM Research and Professor at the University of Padova) highlighted the need for public involvement in creating AI, as AI will impact everybody. Besides practitioners and problem owners, it is important to include researchers, social scientists, data subjects, and policymakers. In addition, they need to be representative of different cultures, genders, disciplines, and stakeholders. Rossi emphasised the need for trustworthy AI, which should take into account fairness, values, explainability, and ethics, and which needs to collaborate with existing initiatives around AI ethics and trust.

Mr Chaesub Lee (Director of the Telecommunication Standardization Bureau, ITU) closed the session by highlighting the urgency of working towards AI for good, as AI technologies risk being hijacked by those using them with bad intentions. In addition, he reiterated the identified need for smoother transitions from pilot projects to global services.

This session took stock of the role of data across the breakout themes discussed over the previous days, and proposed a common framework for the way in which data can be addressed in the context of AI for good. The moderator, Mr Amir Banifatemi (AI Lead at XPRIZE Foundation), introduced the session and passed the floor to Mr Omar Bin Sultan al Olama (Minister of State for Artificial Intelligence, United Arab Emirates).

Bin Sultan al Olama noted the importance of having gatherings like these, as it is only through collaborating and unifying resources and knowledge that we can obtain the benefits of AI. He furthermore mentioned the Global AI Governance Forum, established by the United Arab Emirates, which brings together AI experts to discuss how to govern AI to be able to reap its benefits while avoiding its potential negative impacts.

Next, Mr Urs Gasser (Executive Director of the Berkman Klein Center for Internet & Society at Harvard University) explained that ‘AI for good is only possible when we have data for good’. A team of rapporteurs tracked the conversations in the plenary and breakout sessions of the conference to distil common themes related to data and work towards a first framework, building towards data commons for AI for good. This framework provides a horizontal view across the vertical tracks of the conference, and consists of six layers:

Rapporteurs from each of the tracks provided examples of ideas that fit within the framework:

AI + Satellites: there is a need for more on-the-ground data that is standardised and geo-referenced to be paired with satellite data.

Trust in AI: there is a need for greater transparency, so that those who use the data know how, when, and by whom the data was collected. For example, labels akin to food nutrition labels, readable by humans or machines, could help prevent using the data in inappropriate ways or in ways that introduce unintentional bias.

AI + Smart Cities and Communities: there is a need to gather data from experiments and best practices, publicly accessible, so that everyone can take part in the ethical design of solutions to urban and community problems.

AI + Health: there is a need for transparency in diagnostic e-health tools, which are sometimes used as ‘substitute doctors’: how do they arrive at their decisions?

Gasser added that ideas, projects, and best practices could be categorised in a common analytical framework, to be able to understand what works best in what context.

Banifatemi then moved the discussion towards private and public data: what should be open and what should be controlled? Bin Sultan al Olama suggested that this question needs to be answered in consultation with citizens, integrating their preferences related to the collection and use of their data.

Asked about ways to standardise data, Mr Chaesub Lee (Director of Telecommunication Standardization Bureau, ITU) noted the large variety of data types that can be distinguished, raising questions related to their interoperability. These questions are currently explored by an ITU focus group. Furthermore, he voiced his concern related to the exchange of data and the lack of transparency of how much data is shared, and with whom. While data sharing is essential for smart operations, it requires adequate protection. In addition, one of the participants in the audience suggested that we need to think about making the potential of AI available for all, preventing skewed distributions of its benefits.

In their concluding remarks, Bin Sultan al Olama emphasised the utility of platforms like the UN to push countries towards sharing data, and Lee added that it is the role of UN agencies to ensure the use of data for good. Gasser highlighted the importance of powerful narratives that demonstrate the potential of unlocking data silos. Finally, the rapporteurs of the four thematic tracks stressed the continued importance of qualitative data, of building a community around data commons, of demystifying stories behind AI, and of working on applied problems rather than abstract concepts.

Mr Kenny Chen (Innovation Director, Ascender) moderated the debate, which focused on sharing key lessons from the four tracks of the conference.

Dr Stuart Russell (Professor of Computer Science, University of California, Berkeley) summarised the ‘AI + satellites’ track by highlighting four broad areas of projects: a) predicting deforestation before it occurs, b) tracking livestock to reduce cattle raiding, c) implementing capabilities to ensure micro-insurance, and d) providing an infrastructure platform to deliver continuous, permanent global services based on the autonomous analysis of satellite data. He also stressed that while there are many laudable pilot projects, there is a gap between these projects and the availability of the services to a majority of people, on a global scale. Hence, in order to ensure an easier transition from pilot projects to global services, he suggested building a single platform to facilitate this.

Dr Ramesh Krishnamurthi (Senior Advisor at the World Health Organization) summarised the findings from the ‘AI + health’ track. He outlined four work streams: a) AI for primary care and service delivery, b) outbreaks, emergency response, and risk reduction, c) health promotion, prevention, and education, and d) AI health policy. He then described the 15 projects that the group discussed throughout the conference: AI to detect vision loss, detection of osteoarthritis, AI and digital identity, an AI-based health portal, AI-powered health infrastructure, AI-powered public health messaging, AI-powered epidemic modelling, malnutrition detection based on images, child growth monitoring based on AI, strengthening the coordination of AI-powered resources, AI to improve predictive abilities based on EMR data, AI for public health in India, pre-primary care with AI, AI-powered snake bite identification for first responders, and AI-based social media mining to track health trends.

Dr Renato de Castro (SmartCity Expert) summarised the ‘AI + smart cities and communities’ track. He highlighted three key areas of this track. First, AI used for urban solutions should give voice to citizens in order to co-create their cities; it should also counter harassment and abuse. Second, AI should be used to foster smart governments. Examples of this came from Amsterdam and Brazil, and de Castro stressed that the experience of Amsterdam shows that being allowed to fail and learning from failure is a very important feature. Third, AI can be used to empower smart citizens. Many examples came from Barcelona, which focuses on using AI to empower people, not to replace them. Overall, de Castro stressed that it is important to focus not only on cities but also on the regions surrounding them. This was an important lesson from considering the African context, where it is crucial that benefits are shared across the region so that citizens can benefit without moving to the city.

Also speaking about the findings of the ‘AI + smart cities and communities’ track, Mr Alexandre Cadain (CEO at Anima, Ambassador AI XPRIZE) identified some of the key questions and challenges ahead. First, he argued that it is important to counter the fear, and the risk, that all smart cities will eventually look alike; tailored solutions that recognise history, cultural heritage, and linguistic diversity are important. Second, it is also important to get away from a top-down approach and to begin to view citizens as the problem owners who can identify areas of need and possible solutions. Third, connections and knowledge sharing between the emerging smart cities are needed, and as such an ‘Internet of cities’ might be needed.

Dr Stephen Cave (Executive Director of the Leverhulme Centre for the Future of Intelligence, University of Cambridge) summarised some of the findings of the ‘Trust in AI’ track. He outlined four crucial tasks for the future: addressing gender imbalances, reaching marginalised communities, addressing structural inequalities, and decolonising AI. He also identified three important themes of the ‘Trust in AI’ track. First, developers must earn the trust of the stakeholder communities that are affected. Second, there is a need to build trust across borders. Third, AI systems must be demonstrably trustworthy. In addition, he highlighted that broader outcomes of the track include the realisation that the idea of trust and trustworthiness needs to be interrogated in order to find a common frame of reference; the importance of recognising cultural differences; and the importance of recognising and fostering diversity.

Dr Huw Price (Professor of Philosophy at the University of Cambridge and Academic Director of the Leverhulme Centre for the Future of Intelligence, University of Cambridge) and Dr Francesca Rossi (Research Scientist at the IBM T.J. Watson Research Centre and Deputy Academic Director of the Leverhulme Centre for the Future of Intelligence, University of Cambridge) emphasised that it is important to create and use synergies, and to enable everyone to be aware of and learn from existing projects. To this end, they introduced Trustfactory.ai, which they envision will address some of the concerns discussed in the track.

During the Q&A, Mr David Jensen (Head of Environmental Cooperation for Peacebuilding Programme at UNEP) mentioned the ‘planetary dashboard for global water monitoring’, a new partnership between UN Environment, Google, JRC, ESA, and NASA. The Q&A also raised the important question of how to meaningfully engage with GAFA (Google, Apple, Facebook, Amazon), which was addressed with a reference to creating diversity and implementing multistakeholder approaches.

Mr Frits Bussemaker (Chair, Institute for Accountability and Internet Democracy), acting as moderator, opened the session and explained that the aim is to showcase examples of how countries and organisations are approaching artificial intelligence (AI).

Dr Ahmed Al Theneyan (Deputy Minister for Technology Industry and Digital Capacities, Ministry of Communications and Information Technology, Saudi Arabia) started his intervention by underlining that technology is a key enabler for development. This is why Saudi Arabia has elaborated a comprehensive digitalisation strategy, which is built on several pillars: building resilient infrastructures to support all new technologies, developing the digital skills of the population (with a focus on youth), supporting innovation and entrepreneurship (through, for example, facilitating access to open data), and developing efficient electronic government services. The strategy focuses on promoting sustainable cities and communities, citizens’ health, decent work, economic growth, and gender equality, among other issues. Against this backdrop, the country is exploring the use of AI in innovative, responsible, and ethical ways, while supporting the development of this technological field, through key enablers: governance and legislation, investments, talent, and innovation.

Al Theneyan underlined that Saudi Arabia places high importance on revamping its education system to match technological progress: digital competencies are introduced in primary school curricula, while universities create dedicated programmes and career paths focused on new technologies such as AI. The overall objective of these initiatives is to equip the young generation with the skills needed in the future. In addition, Saudi Arabia has understood the importance of empowering more women to take active roles in technology fields. Its goal is to double the number of women in information and communication technologies (ICTs) over the medium term, and several programmes have been launched in this regard, mostly in collaboration with universities. Aiming to become one of the most attractive destinations for innovators and entrepreneurs, the country is building a network of innovation centres and tech accelerators to support these goals.

Amb. Amandeep Singh Gill (Permanent Representative of India to the Conference on Disarmament, and Member of the Task Force on AI for India’s Economic Transformation) argued that, while there is value in exploring the notion of beneficial AI, we should keep in mind that technology has multiple purposes and can be repurposed. He then went on to outline the work carried out by India’s Task Force on AI. Tasked with determining India’s vision with regard to AI, the task force reached several conclusions: AI should be treated as a tool for problem solving at scale; the country’s governance approach should be agile, sensitive, and rooted in real needs; and enablers and safeguards should be put in place to avoid a backlash against AI, which would set the country back many years. Such enablers include expertise and awareness on AI, a positive social attitude and trust in AI, data literacy and policies for the proper use of data, and leveraging indigenous digital assets and local use-case scenarios. The task force took a cautiously optimistic approach to the overall impact of AI on jobs, and outlined the need for AI to be transparent, explainable, and auditable. It also recommended the creation of a National AI Mission to coordinate AI-related activities in India and build public-private partnerships around concrete AI projects.

Singh Gill concluded his intervention by noting that collaboration and investments are key to supporting a country’s efforts to advance in the field of AI. Investments must be interdisciplinary, and all stakeholders need to be able to contribute to defining governance frameworks for AI.

Mr David Li (Founder, Shenzhen Open Innovation Lab) discussed the open nature of AI innovation. He noted that innovation is driven by access to knowledge, technology, and means of production. If these three elements are in place, innovation ‘can happen in the street’, and one does not have to be a large company to be able to innovate. To illustrate the concept of ‘AI from the street’, Li gave several examples of projects, such as an initiative focused on the development of machine translation tools within the framework of a school which teaches programming to refugees, and a Tibetan Buddhist centre dedicated to teaching young monks about digital technologies.

According to Li, beneficial AI will not necessarily come from Silicon Valley or from Shenzhen, but rather from every street corner where people leverage resources to help their neighbours and communities. This is why we should see AI as a global resource and encourage people to use AI to create solutions to the problems faced by their communities.

During the moderated discussion that followed, a point was made that a smart city is much more than technology: it is community, common space, and governance. AI’s potential lies in its ability to bring people together around problems and problem solving. The success of smart cities will depend on three main elements: a correct understanding of the technology, good collaboration among multiple stakeholders, and smart investments. Technology in itself will not deliver solutions unless the environment around it enables this. Another concluding remark was that AI needs to be designed in such a way that there is transparency and understanding around it. We need to ‘take machines to schools and to streets’, to make everyone feel that they are part of the AI evolution. This, combined with proper governance and collaboration, will allow opportunities to be leveraged and risks to be mitigated.

In his introduction to the session, Mr Houlin Zhao (Secretary-General of the ITU) highlighted the ITU’s connection to space and the relevance of space for telecommunications, and expressed the ITU’s commitment to opening new opportunities for space exploration. He quoted Valentina Tereshkova, the first woman in space, who said that ‘a bird cannot fly with one wing only’, and reminded the audience of the Chinese saying that women hold up half the sky. Thus, he stressed that the active participation of women is needed in space exploration.

The first speaker, Ms Anousheh Ansari (Member & Chair of Management, XPRIZE Foundation Board of Directors, Space Ambassador), shared her very personal experience of becoming the first female private space explorer. As a young girl growing up in Iran, she thought that becoming an astronaut was impossible. Yet, after successfully selling her own company and starting to work with the XPRIZE Foundation, she fulfilled her dream by going to the International Space Station. In her speech, she also stressed the importance of democratising space, a key aim of the XPRIZE Foundation.

Ms Liu Yang (Pilot, Astronaut, and first Chinese woman in space) reflected on her own experience of becoming an astronaut and working on the Chinese space station, Tiangong-1. She argued that artificial intelligence (AI) is crucial for anticipating developments in space and supporting future (human) missions. While AI will enhance human space exploration, the human astronaut can never be replaced.

Ms Samantha Cristoforetti (Astronaut, Pilot, and first Italian woman in space) shared her childhood experience and journey to becoming an astronaut. She then added reflections on AI and space exploration, stressing that ‘AI is pervasive in everything we do in space’. She mentioned satellite data as an example, noting that the European Space Agency is interested in leveraging the potential of AI to make this data more usable. She also pointed out that robotic precursor missions will precede human missions to the moon and eventually to Mars.

All three women were presented with the World Telecommunication and Information Society Day Award by Zhao. In addition, Zhao awarded Dr Marko Jagodic an ITU 50-year medal for his contribution to the ITU.

Dr Jess Whittlestone (Postdoctoral Research Associate, Leverhulme Centre for the Future of Intelligence, CFI, Cambridge) spoke about bridging the policy-technical gap for trustworthy AI. She stressed the importance of policy in shaping the way technology is used and the environment in which it is used.

She argued that AI policy-making is different from policy-making in other areas of science and technology, because it needs to be much more focused on anticipating challenges. The pitfall related to this is two-fold: policy should not be too reactive, but at the same time it should not fall victim to the hype.

Whittlestone suggested that establishing policy requires broad and general thinking that recognises the complexities of the societies and environments in which technology is used. To achieve this, inputs from a wide range of stakeholders are needed. While technical experts cannot answer these questions alone, it is also obvious that there are few senior policy makers with the necessary technical expertise. These two communities need to improve their communication and tackle the challenges arising from the very different languages they speak. In this regard, we also need to ask what level of technical understanding policy makers need in order to be able to ask and answer the right questions.

Whittlestone suggested a number of ways to bridge the policy-technical gap: digital and technical training for policy makers, digital coaches for members of parliament (MPs), data ethics frameworks within governments, and scientific advisors in government.

She also stressed that terms such as trust, fairness, privacy, and transparency mean different things to different groups of people and are discussed in a variety of ways in relation to technical solutions. It will be important to connect the various communities to bridge the gaps in mutual understanding.

The next speaker, Dr Rumman Chowdhury (Senior Principal of AI, Accenture) spoke about ‘Trustworthy data: creating and curating a repository for diverse datasets’. She highlighted that in a number of cases, biases already come in at the stage of data collection. For example, AI that engages in natural language training based on broad input from the Internet often results in sexist AI. Similarly, because of a lack of diversity in the data sets that are used for training facial recognition AI, this AI often works best for white and male persons while struggling with the rest of the population.
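To make the kind of bias Chowdhury described more concrete, the following is a minimal sketch (not her method) of one common check: measuring whether occupation words in a trained word embedding sit closer to ‘he’ than to ‘she’. The toy vectors and word lists are illustrative placeholders; a real audit would load pre-trained embeddings with hundreds of dimensions.

```python
# Minimal sketch of auditing gender association in word embeddings.
# The `vectors` dictionary below is a toy placeholder, NOT real embedding data.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def gender_association(word, vectors):
    """Positive -> embedding leans towards 'he', negative -> towards 'she'."""
    v = vectors[word]
    return cosine(v, vectors["he"]) - cosine(v, vectors["she"])

vectors = {
    "he":       [1.0, 0.1],
    "she":      [0.1, 1.0],
    "doctor":   [0.9, 0.3],
    "nurse":    [0.2, 0.9],
    "engineer": [0.8, 0.2],
    "teacher":  [0.5, 0.5],
}

for word in ["doctor", "nurse", "engineer", "teacher"]:
    print(f"{word:>9}: {gender_association(word, vectors):+.3f}")
```

A systematic skew in such scores across occupation words is one signal that the underlying training text carried the kind of societal bias Chowdhury warned about.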

As one solution, Chowdhury and her collaborator suggested building a repository for open data. Data scientists need to rely on ‘what is out there’, and the challenge with open data and ‘available data’ approaches is convincing people to make part of their data open. In order to work towards the repository, trust building and ethical principles need to be built into the process from the very beginning. Consent is of course an important aspect. However, with the rapid developments in AI, she argued that complications arise if people are asked to consent to their data being used for purposes yet unknown.

Chowdhury and her collaborator argued that the question of what trustworthy data is does not have an easy answer. However, they noted that the AI hype sometimes leads researchers and developers to disregard the basic principles of data collection. Similarly, they stressed that data collection is affected by policies: changes in policies can change the data available and introduce further biases into the algorithm, which then needs to undergo several further development iterations before it yields useful outcomes.

Chowdhury also stressed that bias can come not just from data but also from the data scientists themselves. This includes collection biases, measurement biases, and contextual societal biases. In the Q&A part of the session, Chowdhury and her collaborator stressed that the focus is not on creating non-biased data, which is impossible given how contextual bias is.

Dr Krishna Gummadi (Head of the Networked Systems Research Group, Max Planck Institute for Software Systems) focused on the question of assessing and creating fairness in algorithmic decision-making. He used the example of algorithms that are used in the US justice system (such as COMPAS) to assess the likelihood of relapse into criminal behaviour. These algorithmic predictions then play a role in making decisions about granting bail.
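One common way to assess fairness in such risk-score systems (a general illustration, not Gummadi’s specific methodology) is to compare error rates across demographic groups, for example the rate at which people who did not reoffend were nonetheless flagged as high risk. The sketch below uses purely illustrative data and an arbitrary threshold.

```python
# Minimal sketch: comparing false positive rates of a risk-score threshold
# across demographic groups. Records and threshold are illustrative only.
from collections import defaultdict

def false_positive_rates(records, threshold=0.5):
    """records: iterable of (group, risk_score, reoffended) tuples."""
    fp = defaultdict(int)   # flagged high risk but did not reoffend
    neg = defaultdict(int)  # everyone who did not reoffend
    for group, score, reoffended in records:
        if not reoffended:
            neg[group] += 1
            if score >= threshold:
                fp[group] += 1
    return {g: fp[g] / neg[g] for g in neg if neg[g]}

sample = [
    ("A", 0.8, False), ("A", 0.3, False), ("A", 0.9, True),
    ("B", 0.7, False), ("B", 0.6, False), ("B", 0.2, False),
]
print(false_positive_rates(sample))  # e.g. {'A': 0.5, 'B': 0.67}
```

A large gap between groups in such a metric is one of the disparities that fairness research, including the perception studies described next, tries to interpret and address.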

Gummadi and his collaborators were interested in perceptions of fairness in relation to these algorithms, and conducted surveys with people affected as well as with the general population. In broad terms, perceptions of what is fair were similar among respondents. However, differences emerged with regard to the relevance and reliability of some of the questions. For example, there was no agreement among those surveyed on whether the criminal history of parents or the behaviour in a defendant’s youth should play a role in the assessment. The survey also showed that the causal mechanisms between these facts and the likelihood of relapse were assessed in diverse ways. One interesting finding of Gummadi and his collaborators is that differences in political position (liberal vs conservative) lead to differences in the extent to which behaviour is viewed as volitional or as a feature of the environment and social group membership.

One conclusion is that it seems difficult to find agreement among survey respondents on the causal mechanisms that underlie algorithmic decision-making in this example. This raises the question of to what extent we can actually settle societal disagreements in moral reasoning in order to build algorithmic decision-making tools.

Can artificial intelligence (AI) help predict the spread of diseases? Can machine learning help responders to better allocate resources in emergencies? These questions were raised by the moderator, Mr Dominic Haazen (Lead Health Policy Specialist, World Bank), to introduce this session, which addressed the potential of AI in the context of epidemics and emergency response.

Mr Ingmar Weber (Research Director for Social Computing, Qatar Computing Research Institute) explored the potential of social media to provide targeted advertising for public health campaigns. Whereas social media is currently predominantly used by public health agencies to broadcast messages, often ‘preaching to the choir’, there is potential to adapt messages to different groups, for example based on age, gender, marital status, location, education level, or interests. While bearing in mind privacy concerns, this allows for the distribution of the right message to the right person at the right time in a very cost-effective way.

Ms Jeanine Vos (Head of SDG Accelerator, GSMA) highlighted the potential of mobile big data to accelerate impact in the context of the sustainable development goals (SDGs), as it can create powerful insights about the location and movement of populations. For example, mobile data could detect the movement of internally displaced persons after an earthquake or the spread of a disease, especially when combined with other sources of data. GSMA’s Big Data for Social Good project explores these opportunities and places them in a consistent framework of best practices.

Ms Clara Palau Montava (Technology Team Lead, UNICEF) presented some of the work of UNICEF’s innovation unit. For example, responding to the 2015 Ebola crisis, it launched an open source messaging platform and worked with mobile operators to detect patterns of movement, extrapolating the spread of the epidemic. In the context of the Zika crisis, the agency combined various data sources, such as mosquito prevalence, poverty, and weather data, to estimate the disease’s dynamics. Montava emphasised that there is a continued need for scientific studies to better understand the bias behind these methods, especially if they are to be combined with machine learning. In addition, innovation in emergency response requires collaboration among organisations, and cannot be done by one agency alone.

Ms Anita Shah (Managing Director of the Kenya office of Kimetrica) presented the Method for Extremely Rapid Observation for Nutritional Status (MERON), which aims to detect malnutrition in children during humanitarian emergencies using facial recognition technologies. Traditional methods of measuring the nutritional status of children in emergency settings are plagued by a number of challenges, such as the skills required of the researchers, the bulky equipment that needs to be transported, and the degree of physical contact between the researcher and the child. The method can assist in the timely identification of children in need of nutrition support. The project is intended to scale up and be tested in different countries and emergency contexts.

Mr Marcel Salathé (Professor & Head of the Digital Epidemiology Lab, École Polytechnique Fédérale de Lausanne (EPFL)) explained how health trends could be tracked using crowd-sourced social media data combined with machine learning. EPFL’s ‘Crowdbreaks’ monitors disease patterns in real time across countries by collecting tweets with keywords that could be relevant to specific health issues. The algorithm is continuously updated with newly labelled tweets and feedback from users. As the application of AI to such projects often involves many actors, Salathé emphasised the need to harmonise the diverse incentives of different entities, adding that the failure of some projects is not due to a lack of ‘good’ incentives, but rather due to their misalignment.
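The general pattern described (keyword filtering followed by a classifier that is updated incrementally as new human-labelled posts arrive) can be sketched as below. This is not the Crowdbreaks code; the keywords, labels, and example posts are illustrative, and scikit-learn is assumed to be available.

```python
# Minimal sketch: filter posts by health-related keywords, then keep an
# incrementally trained relevance classifier up to date with new labels.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier
import numpy as np

KEYWORDS = {"flu", "fever", "vaccine"}          # illustrative keyword list
vectorizer = HashingVectorizer(n_features=2**18, alternate_sign=False)
classifier = SGDClassifier()                    # supports incremental partial_fit
CLASSES = np.array([0, 1])                      # 0 = irrelevant, 1 = relevant

def keyword_filter(posts):
    # Keep only posts containing at least one monitored keyword.
    return [p for p in posts if KEYWORDS & set(p.lower().split())]

def update_model(texts, labels):
    # Called whenever a new batch of human-labelled posts is available.
    X = vectorizer.transform(texts)
    classifier.partial_fit(X, labels, classes=CLASSES)

def score(posts):
    return classifier.predict(vectorizer.transform(posts))

update_model(["got the flu, feeling awful", "flu season sale on socks"], [1, 0])
print(score(keyword_filter(["high fever since monday", "nice weather today"])))
```

The feedback loop Salathé described corresponds to calling `update_model` repeatedly as users label newly collected tweets, so the classifier tracks shifts in how people talk about a disease.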

Mr Jochen Moninger (Head of Innovation, Welthungerhilfe) focused on the potential of detecting child malnutrition using AI. He pointed out that nutrition during a child’s first five years is crucial for their development, and that while there is enough food in the world, it is not well distributed: ‘we don’t know where to bring it’. Welthungerhilfe is developing a tool to identify malnutrition: a mobile app that uses augmented reality in combination with AI to calculate a child’s weight and height through a 3D scan. This allows for a rapid response in areas where malnutrition is prevalent; swift action is vital because sustained malnutrition in childhood has a lasting negative impact on the rest of a child’s life.
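Once height and weight estimates are available (however they are obtained), acute malnutrition is conventionally screened with a weight-for-height z-score against WHO growth reference values, where a score below -2 indicates moderate and below -3 severe acute malnutrition. The sketch below illustrates that screening step only; the reference medians and standard deviations are placeholders, not real WHO tables, and this is not Welthungerhilfe's implementation.

```python
# Minimal sketch: flagging acute malnutrition from estimated height and weight
# via a weight-for-height z-score. WHO thresholds: z < -2 moderate, z < -3
# severe. The reference table below is a placeholder, NOT real WHO data.
REFERENCE = {
    # height_cm: (median_weight_kg, standard_deviation_kg)  -- placeholders
    80: (10.4, 0.9),
    85: (11.5, 1.0),
    90: (12.7, 1.1),
}

def weight_for_height_z(height_cm, weight_kg):
    # Snap to the nearest height in the placeholder reference table.
    nearest = min(REFERENCE, key=lambda h: abs(h - height_cm))
    median, sd = REFERENCE[nearest]
    return (weight_kg - median) / sd

def classify(height_cm, weight_kg):
    z = weight_for_height_z(height_cm, weight_kg)
    if z < -3:
        return "severe acute malnutrition"
    if z < -2:
        return "moderate acute malnutrition"
    return "not acutely malnourished"

print(classify(86, 8.9))   # z = (8.9 - 11.5) / 1.0 = -2.6 -> moderate
```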

Mr Frederic Werner (Senior Communications and Membership Officer, International Telecommunication Union (ITU)) moderated this session. He highlighted some key areas of focus for the discussion: connecting those with a good understanding of the situation on the ground with artificial intelligence (AI) experts, making AI relatable for people with a non-technical background, and working towards the sustainable development goals (SDGs) with AI.

Dr Aimee van Wynsberghe (Co-Founder and Co-Director of the Foundation for Responsible Robotics) focused on ethics as a driver for innovation and explained that ethics relate to ideas of what we consider the good life, and help us distinguish between right and wrong, and good and bad. She emphasised that taking ethical considerations into account when designing new technology and especially robotics, should not be seen as a hindrance, but rather as a way to push engineers and developers a step further.

Van Wynsberghe argued that we should not perceive technology as being neutral. On the contrary, she pointed out that technology is creating and co-creating our societal norms, values, and meaning. Technology could change the elements of what we think is constitutive of the good life. In fact, technology already shapes how we get to what we perceive as the good life, such as how information and communications technology (ICT) helps us bridge geographical distances and connect with friends and family. Building on this, we can see how robotics and AI may change our perception of what constitutes a ‘good life’ and how we can achieve it. Van Wynsberghe suggested that ethical questions surrounding robotics and AI can be clustered into three main categories: regulations, users, and technology. Regulations touch on questions such as: What are the standards for training AI? How can we make robots that enhance rather than replace humans? Ethical considerations surrounding users include questions such as: How do users perceive robots? What could human-robot interactions do to human-human interactions? How should users be obliged to act towards robots? And last but not least, a key ethical question relating to the technology itself is whether or not robots and AI are (ethical) agents in themselves.

Mr Maurizio Vecchione (Executive Vice-President of Global Good and Research, Intellectual Ventures) focused on the role of technology in saving humanity and argued that population-scale problems need to be put into sharper focus. To do so, he argued, various disciplines need to work together and solutions need to recognise the complexities on the ground. One specific example Vecchione gave to illustrate the vast potential of technology relates to the so-called smallholder innovation paradox. He argued that it is generally recognised that agriculture is a way out of poverty in low-income countries; yet, in many cases, agricultural activities do not yield enough productivity. Here, better data combined with AI can produce analytics on soil and the environment, and provide predictions on crop yields as well as planning advice. Similarly, it can contribute to much-needed financial services. The data and services can easily be accessed via smart devices and can allow smallholders to improve productivity and access new opportunities.

Dr Francesca Bria (Chief Technology and Digital Innovation Officer, Barcelona City Council) spoke about her work and gave concrete examples from the city of Barcelona. She emphasised data commons and ethical digital standards to solve urban challenges with clear rules and democratic control. She also stressed opportunities for collective action and for citizens taking control and ownership. The role of the city is crucial for the future of AI for good: there are opportunities for the bottom-up empowerment of citizens and for re-thinking technology to serve the city.

Bria stressed that in order for data and AI to serve the common good, we need to create trust and ownership. This means focusing on transparency regarding data and the algorithms used. It also includes offline and online consultations with citizens, and giving citizens control of their data. Data needs to be treated as a commons, and a new legal regime of data ownership needs to be created. She suggested using blockchain technology and attribute-based cryptography in order to give control back to citizens and to allow them to decide what data is private and what data can be shared and become a common good. She thus advocated that citizens regain data sovereignty.

The session was opened by Ms Claire Craig (Director of Science Policy, the Royal Society), who explained that trust is an issue which crosses the boundaries of nations and countries; different cultures may have different understandings of the notion of trust, and it is important to understand these differences to be able to develop trusted applications.

Mr Liu Zhe (Professor, Peking University) spoke about cultural differences when it comes to trust in artificial intelligence (AI). Trust in AI, he argued, must be considered in the context of existing technology and possible progress in the foreseeable future; basing the discussion on science fiction is a dangerous thing. Liu then went on to discuss the issue of scoping the problem of trust in AI and robots.

He mentioned that in China and other Asian cultures people seem to be enthusiastic about AI and other emerging technologies. This may lead to an over-trust in technology, which involves a certain deception in the interaction between humans and technology. The risks of over-trust and misplaced trust are very high and we need to address such risks when we think about the relation between humans and AI and robots.

He emphasised the importance of making a distinction between mistrust and misplaced trust or over-trust. He then explained that, when we think about the notion of trust, we consider it largely from the perspective of personal relations. But is it appropriate to look at the relation between humans and technology as some type of interpersonal relation? Should we insist on using trust as an appropriate framework to conceptualise our relation to beneficial AI? If not, what is the alternative?

Answering a question from the audience about how we can measure trust, Liu noted that, before measuring, we should understand the relationship between humans and AI, and what it entails. In his view, it is not clear whether we should use ‘trust’ as a framework to assess this relation. In a follow-up comment, a participant asked whether trust in AI is not a question of trust in other human beings (i.e. the programmer or the engineers building the application, the company, the government, etc.) rather than a question of trust in the technology itself. The same goes when we talk about ethics in AI: the discussion is about ethics in how the engineer designs the system.

Ms Kanta Dihal (Research Project Coordinator, Leverhulme Centre for the Future of Intelligence, University of Cambridge) presented the AI Narratives project, which focuses on examining the stories we tell about AI and the impact they have on the technology and its use. The goal of the project is to understand the hopes and fears that shape how we perceive AI, and the relationship between our imagining of the reality and the technology itself.

Dihal spoke about the fact that the impact of AI will be global and that, because of this, managing AI for the benefit of all requires international and multidisciplinary co-operation. But different cultures see AI differently. To build trust across cultures, we must understand the different ways in which AI, and what it could do, is perceived.

She also pointed out that there might be limitations in the way we talk about AI; for example, we might be distracted from the real problems by science fiction, fantasies, and the fear of ‘killer robots’. The narratives of rebellion seem to significantly shape our fears about intelligent machines. And this reveals a paradox: we want clever, ‘superhuman’ machines that can do things better than us (and for this we entrust machines with human attributes like agency, intellect, and autonomy), but at the same time we want to keep them ‘sub-human’ in status. The perception of AI is influenced by both fiction and non-fiction, and this creates a goal-alignment problem: whose values and goals are actually represented in the development of AI?

Mr David Danks (Department Head and Professor of Philosophy and Psychology, Carnegie Mellon University) and Ms Aimee van Wynsberghe (Co-Founder and Co-Director, Foundation for Responsible Robotics) presented their project on ‘Cross-national comparisons of AI development and regulation strategies – the case of autonomous vehicles’. Danks spoke about the fact that sometimes, when we think about trust, there is a feeling that we are not sure what we are talking about. However, he noted that trust is a very well understood notion and there is no need to reinvent the wheel. When we speak about trust and technologies, there are several important questions to consider: What do we expect from technologies? How do we make ourselves vulnerable through the use of technology? And how do we find a middle ground?

We can think of trust in two ways. On the one hand, we have behavioural trust, based on reliability, predictability, and expectation grounded in history. This kind of trust is useful, but it can be fragile. On the other hand, we have trust grounded in an understanding of how the system works. This is the kind of trust we have in one another, and it is based on our knowledge of people’s values, interests, etc. This trust is helpful because it can be applied to novel situations. Danks gave the example of how pedestrians in the city of Pittsburgh, USA (where Uber used to heavily test self-driving cars) interact with self-driving cars. There are many cases of people jaywalking in front of self-driving cars. When asked why they do this, they often say that they trust that the car will stop, because they have seen other cars stop when other pedestrians jaywalked. This is behavioural trust: the pedestrians trust the technology because they have seen it function a number of times.

Giving a brief overview of the project, van Wynsberghe explained that the aim is to explore the ways in which different states regulate AI technologies, and how these regulations impact the notion of trust. The project also looks at the differences between regulations and cultural norms across various countries. The hope is to be able to use the results of the project as a starting point to more systematically understand various best practices in terms of technology, regulations, and social norms.

The session concluded with an emphasis on the need to facilitate a better understanding of the interactions with AI and robots. In the case of self-driving cars, for example, mechanisms that indicate to pedestrians when a car is on autonomous mode could improve this understanding.

The session AI Fostering Smart Government was moderated by Mr Frans-Anton Vermast (Strategy Advisor & International Smart City Ambassador, Amsterdam Smart City). He started with an introductory speech about Amsterdam Smart City, its structure and purpose. Its team is currently focusing on digital transformation and social inclusion, on how to enhance public trust in local governments, and on the accountability of private tech companies operating in the public sphere. He argued that the digital city is inclusive: data and technology do not have to constitute limitations for people. Furthermore, the smart city approach includes the following principles: inclusivity, control, being tailored to the people, legitimacy, openness, and, finally, being by everyone for everyone. Vermast also talked about the work of the city’s DataLab, meant to create innovation through competition; its next project will focus on opening up algorithms. The final principle he talked about was the possibility for citizens to choose algorithms in accordance with the concept of ethical integrity.

Ms Carla Dualib (Secretary of Communication and Press at Diadema City Hall, Brazil) talked about Diadema Open Evolution’s work in engaging Brazilian cities to use artificial intelligence (AI) for the benefit of people. Furthermore, she talked about ‘House Beth Loba’, a project for protecting women who are subjected to violence, and explained that AI can enhance the collection, analysis, and mining of data gathered through the project, which can then be used for social benefit. The Internet of Things (IoT) and AI are the core interests of Diadema. Finally, she stressed that it is crucial to talk to people using emerging technologies, to make them aware of how these technologies work and what they imply.

Mr Renato de Castro (SmartCity Expert) structured his speech around the current evolution of technologies for smart cities and argued that the general understanding of smart cities should be questioned: smart cities are not just for big cities. He went on to give examples of how AI can be used to address local issues. The first project he talked about was implemented in Brazil, in small villages faced with drought, where IoT helped tackle the situation by providing better tools for weather forecasts and for sending alerts to citizens. The second example stressed the concept of public-private-people partnerships: the inclusion of people will indeed increase with the implementation of smart cities.

After the panellists’ speeches, the moderator opened the floor for discussion. Dualib commented that smart citizens are needed for smart cities. De Castro followed up by adding that the United Nations and its agencies should start speaking the language of smart cities in order to have a proactive role with global impact. On the question of defining ‘smart government’, de Castro argued that the concept is new for a lot of countries; thus, a starting point is to gather best practices from around the world. Dualib proposed creating an open, international, community-driven platform for sharing information about various initiatives and serving as a repository of ideas. Finally, the moderator added the need to share lessons learned, for a more constructive strategy.

One question posed by the audience regarded best practices in data management and data ownership, in the context of the governmental duty of data protection. It was highlighted that data protection represents a challenge that needs balanced regulation that does not limit the development of new technology. Another question was posed about the limitations in the making of infrastructure development policies. De Castro argued that public-private partnerships are a key solution; however, big global entities are playing a new role, challenging the creation and implementation of these policies. The concept of subsidiarity was also proposed as a solution to the problem by Vermast. Finally, the last comment introduced the concept of fairness, accountability, and transparency (FAT) to be used in enhancing the ethical integrity of algorithms. De Castro called upon academia to take on a role in this regard.

The first speaker, Mr Joaquin Rodriguez Alvarez (Professor and Researcher EPSI-UAB, Leading Cities Coordinator), said that technology holds a lot of promise, but that it does not come without its share of problems. It is therefore necessary to be prudent when it comes to the development and applicability of artificial intelligence (AI). He also stressed the importance of data management through two radical examples: first, he recalled the use of data during the Holocaust; second, he spoke about the role that data played in the extermination of part of the population in Rwanda. He argued that it is not possible to completely trust either the private sector or the public one. Today, it is easy to manipulate public opinion and negatively affect democracy with the tools we have available. It is crucial to be careful with digital technologies and AI.

Rodriguez Alvarez further stated that the concept of ‘empowering people’ is based on the notion of sharing knowledge and awareness. Finally, he stressed that technology is not neutral, but that it ‘learns’ from the society it is installed in. The main concerns in this regard are related to, but not limited to, lethal autonomous weapon systems (LAWS). Human dignity has to be taken into account in the development and application of these technologies: technology is used by humans who can have peaceful or hostile purposes.

Mr Jacques Ludik (Founder & CEO, Cortex Logic; Founder & President, Machine Intelligence Institute of Africa (MIIA)) focused on how to use technology for better development and problem-solving. The talk covered health, water, smart education, and smart technology services for African smart cities. He first discussed the inclusion of AI in the community, and a data platform for Africa. On the issue of health data and analytics ecosystems for Africa, MIIA is collaborating with private companies in order to operationalise health systems. Ludik also touched upon the need for smart education in Africa, so that Africa can play an active role in the fourth industrial revolution. He then continued his intervention by attempting to define smart citizens as those who put responsibilities first. Finally, he concluded by saying that empowering smart citizens in a new way involves bottom-up decision making, non-linear approaches, encouraging complexity, embracing uncertainty, and enabling and boosting creativity.

The moderator then opened the floor for the presentation of three projects that were then discussed by the panellists and the audience. The first project focused on empowering homeless people in line with sustainable development goals (SDGs) 8 and 11; it plans to provide smartphones through which people can share data that governments and international organisations can use for more effective action. The second project focused on managing cars as a common good: one of its direct effects is saving urban space through car-sharing, and people can change their concept of mobility and the way they behave. Finally, the third project was about an inclusive innovation roadmap, meant to incentivise investments in AI and to include the following points: enhance city operations, connect citizens with the city government, implement open data, close the digital divide, and make Internet access a public good. In line with the SDGs’ values, the project satisfies the P4 concept: people, planet, performance, and place. In the final comment of the session, the opportunities of blockchain were discussed as a means of empowering citizens.

The breakout session on ‘AI + Smart Cities and Communities’ was opened by Mr Renato de Castro (SmartCity Expert) and Mr Alexandre Cadain (Co-Founder & CEO at ANIMA and XPRIZE Ambassador). De Castro explained that the session would explore smart city design projects and the technology behind them, with a focus on how artificial intelligence (AI) impacts cities and communities.

In de Castro’s view, smart cities are built on five key components: the underlying technology (information and communications technology, big data, algorithms, etc.), the citizen-centric dimension, the goal of improving citizens’ lives, the emergence of new economies (such as the sharing economy), and the promotion of resilience. Smart cities are not and should not be about building new cities, but about building better cities to live in. To achieve this, it is important that smart cities projects move from a public-private partnership (PPP) approach to a public-private-people partnership (PPPP) approach, and involve citizens as equally important stakeholders in the development and implementation of such projects.

Cadain spoke about an evolution of ‘smart cities’ towards ‘intelligent cities’ and ‘ideal cities’. The ideal cities of the future could be cities in which there is a perfect harmony between humans and technology. Finding solutions for achieving such harmony is a task that requires collaboration across stakeholder groups and disciplines. He further noted that, when we look at successful smart cities projects, we should try to identify solutions and applications that could be replicated in other cities around the world. Moreover, we should consider how certain smart cities applications from developed countries could be replicated in cities in developing countries.

Mr Akihiro Nakao (Professor, University of Tokyo), acting as moderator and panellist, focused his intervention on 5G technology and its use in smart cities applications. Speaking about the importance of resilient communications infrastructure for the development of future smart cities, Nakao presented a project which combines 5G technology with AI to deliver real-time video surveillance. Described simply, the project involves the use of drones, 5G technologies, and machine learning to capture and transmit a real-time feed of city surroundings, and to analyse what happens on the ground through object recognition technology. Such a project could help improve city safety, which is a growing concern nowadays.

Nakao also touched upon issues related to privacy and data protection in the context of smart cities. He pointed out that companies do need data to be able to produce services that are beneficial for citizens, but that such services should be developed without violating privacy rights.

Mr Brian Markwalter (Senior Vice-President, Consumer Technology Association) started his intervention by mentioning that the Internet of Things (IoT) and AI technologies are continuously evolving, and while this evolution comes with opportunities (such as supporting urbanisation processes around the world), there are also challenges. Technology in itself might not necessarily evolve in a positive way, and this is something we should always keep in mind. Markwalter noted that, as people are already experiencing ‘AI technology coming to meet them and make their lives easier’ (through everyday applications such as smart speakers and other digital systems), they increasingly expect the same from their cities. And there are many areas in which cities can put AI technology to use to improve people’s lives, from transport and finances, to energy and the use of resources. When it comes to challenges that can impact the evolution of smart cities, Markwalter mentioned privacy and data protection concerns for end users, and cost and return-on-investment concerns for private companies and public entities.

Mr Chaesub Lee (Director, Telecommunication Standardization Bureau, International Telecommunication Union) spoke about the work carried out by the ITU-T Study Group 20 on IoT and Smart Cities and Communities. The group has been working on developing international standards that leverage the use of IoT technologies to address urban-development challenges. It has so far produced several key performance indicators (KPIs) for smart cities, which have already been applied in cities like Dubai and Singapore to assess the performance of smart cities applications and identify areas for improvement. The group will also explore the use of AI for smart cities.

Lee also spoke about the UN initiative on United for Smart Sustainable Cities, which aims to encourage the development of public policies and the use of information and communications technologies (ICTs) to facilitate the transition to smart sustainable cities.

Lee further explained that there are multiple layers behind the notion of smart cities: infrastructures support communication, connected devices produce data, platforms collect that data and provide capabilities to develop services and applications, and these services are then provided to citizens in line with regulations and operational principles. All these layers have multiple needs (for example, the infrastructures and devices need to be interoperable, while the regulations need to be based on shared knowledge). By considering all these needs, we can improve the technology and the ‘quality of smartness’.

Cities are, by nature, distinct in terms of geographic location, history, citizen behaviour, culture, etc. The goal of smart cities should not be to create uniform cities, but rather devise ‘smart solutions’ that are adapted to the specificities of each city. The main challenge at hand, therefore, is how to apply AI and other technologies to individualised smart cities.

Mr Andrejs Vasiljevs (Co-founder and Chairman of the Board, Tilde) spoke about the need to consider language diversity in the development of smart cities. Nowadays cities are increasingly multilingual, and AI technology can be put to use to create multilingual solutions for inclusive smart cities and societies, which empower people. Machine learning technology has seen significant progress over the past years, and has helped advance translation of smaller and more complex languages, demonstrating that technology is ready to support all communities, irrespective of how big or small they are. In Latvia, for example, the government has deployed machine translation to allow people speaking different languages to access and use e-government services.

Vasiljevs also mentioned another example of AI being used to promote more inclusive societies: chatbots or virtual assistants used by public bodies to facilitate interactions with citizens. He again gave an example from Latvia, where a virtual assistant, Una, provides guidance and support to citizens who want to set up a company.

During the final round of discussions, several points were made:

Combining technology fields – such as machine translation and telecommunications infrastructure to deliver simultaneous translation – has the potential to advance smart cities.

Every city is different and this needs to be considered when technologies are standardised and applied.

When it comes to privacy and data protection, what matters most is that people understand what happens with their data. Data is essential for smart services, and we might not have smart services without sharing and using data. But it is essential that users are able to make informed decisions about the data they disclose and stay in control of their data. Data should be used with the need for privacy in mind.

The panel discussion on Building digital competencies to benefit from existing and emerging technologies with a special focus on gender and youth dimensions took place at the United Nations Office in Geneva (UNOG) during the 21st session of the Commission on Science and Technology for Development (CSTD).

The session was opened by Amb. Geraldine Byrne Nason (Chair, Commission on the Status of Women (CSW)). In her video message, Byrne Nason considered that the CSTD and the CSW share areas of common interest regarding the discussion on women’s rights and technology. Moreover, both commissions share credit for creating a co-operative and collaborative discussion around the 2030 Agenda for Sustainable Development. She explained that the CSW is a space for addressing gender equality and is based on the Beijing Platform for Action (adopted at the Fourth World Conference on Women held in Beijing in September 1995). She also affirmed that technological innovation is a fundamental driver for integration, and thus the challenges faced by women need to be urgently addressed. ‘With the fast pace of the technological progress, there is an unprecedented urgency to close the gender gap’, she maintained. Byrne Nason concluded by considering that educating women in information and communications technology (ICT) skills is crucial for achieving sustainable development goal (SDG) 5 and, more broadly, the 2030 Agenda.

Ms Shamika N. Sirimanne (Director, Division on Technology and Logistics, Head of the CSTD Secretariat at UNCTAD) introduced the Report of the Secretary General on building digital competencies. She stressed that new technologies bring both opportunities (e.g. improve living standards) as well as challenges. ‘Development gains are not automatic but depend on the readiness of countries to adopt and adapt to such advancements’, she clarified. She then focused on the existing ‘mismatches’ regarding building digital competencies: by 2020 around 80-90% of jobs in the European Union will require ICT skills and by 2030 around 3-14% of the global workforce will need to switch to new occupational categories. Moreover, there is a considerable gender gap in the access to digital technologies: women are underrepresented in ICT specialised occupations, and in the science, technology, engineering, and mathematics (STEM) fields.

She concluded by illustrating strategies suggested for building digital competencies such as incorporating digital skills in the educational system, creating an enabling environment by investing in digital infrastructure, and promoting collaboration among different stakeholders.

Ms Miriam Nicado García (Rector, University of Informatics Sciences (UCI), Ministry of Higher Education, Cuba) explained that since 1959, Cuba has progressed in education and science by creating a network of higher education centres comprising over 50 universities nationally. She stated that the UCI was created in 2002 to train professionals dedicated to informatics sciences. Overall, 40% of its students are women, and they can include a variety of subjects in their studies, such as engineering, mathematics, and bioinformatics.

Ms Helena Dalli (Minister of European Affairs and Equality, Malta) noted that in Malta, women are underrepresented in STEM subjects. The gender gap in education is also reflected in the job market: women tend to be paid 30% less than men as ICT specialists, and they are outnumbered five to one by men in the same sector. Moreover, in Malta, men are twice as likely as women to become engineers and five times more likely to become software developers. She explained that Malta’s national digital strategy strives ‘to make the most out of the technological advancement’, with special attention to gender issues. This is why the government has taken several family-friendly measures aimed at increasing the presence of women in leadership positions in science and technology. She concluded by considering that ‘stereotyping is the most serious impediment to women moving forward in science’. That is why Malta voted in favour of the establishment of the International Day of Women and Girls in Science, held on 11 February.

Ms Sophia Bekele (Founder & CEO of the DotConnectAfrica Group) first considered that ‘we grow because every day we endeavor to know’. She explained that the current state of the Internet and the latest technological developments ‘have revolutionised Africa’. Despite the fact that Sub-Saharan Africa has the highest number of female startups, there are still existing challenges that women entrepreneurs face while launching a start-up, such as lack of finances, product pricing and poor marketing. She explained that new technologies could significantly help small and medium sized enterprises (SMEs) in, for example, the development of green energy sources. She concluded by explaining that the DotConnectAfrica group aims at fostering the use of Internet of Things (IoT) devices and new technologies in Africa through a partnerships and mentorships programme, as well as complementary gender empowerment initiatives such as Miss.Africa.

The session was opened by Ms Anja Kaspersen (Director, United Nations Office for Disarmament Affairs), who introduced the speakers.

Mr Wolfram Burgard (Professor of Computer Science, Albert-Ludwigs-Universität Freiburg) started his intervention by noting that there is a need to transform the way we think about artificial intelligence (AI) and take a more positive attitude. AI is already a part of our lives and we see it in multiple applications, from web services and games, to manufacturing and agriculture. As the technology continues to progress, it is expected to play an increasingly important role in several areas. For example, highly accurate navigation systems empower industrial robots to move with more agility from one place to another and thus enhance productivity. The same systems are crucial for companies working in the field of self-driving cars. In healthcare, big data, algorithms, and neural networks are used in multiple applications, from diagnosing certain diseases to neuro-robots which help people with disabilities perform daily tasks. In agriculture, AI brings precision farming, supporting a more efficient and sustainable use of resources. These are only a few examples which show that AI is an important tool for the well-being of society.

Responding to a question from the audience about the risks associated with AI, Burgard acknowledged that one of the main challenges with AI agents is that they need to operate in a world that they do not fully know. Taking the example of self-driving cars, the technology needs to be able to take into consideration the environment in which it operates, and there is still much work to be done by researchers to empower algorithms in this regard.

Ms Terah Lyons (Executive Director, Partnership on AI) spoke about the work that the Partnership on AI plans to do to support the development of AI technology that benefits everyone. The partnership, which has over 50 members from both private companies and non-profit entities, is intended to serve as an open multistakeholder platform dedicated to fostering discussions and public understanding of the implications AI has for people and society, and to facilitating the development of best practices on AI technologies. Its members share the belief that AI holds the promise of raising the quality of people’s lives, and of helping humanity address some of its most pressing problems, such as poverty and climate change. The partnership will focus on six major areas of work: safety-critical AI; fair, transparent, and accountable AI; collaborations between people and AI systems; AI, labour, and the economy; social and societal influences of AI; and AI and social good.

Lyons underlined the need for an active understanding of the challenges associated with the development and use of AI. These challenges can only be addressed in a multistakeholder and multidisciplinary manner, and this is also the case when it comes to developing policies and regulations in the field of AI. Moreover, it is important to start addressing these concerns now, if we are to develop AI for social good.

Ms Celine Herweijer (Partner, Innovation and Sustainability, PricewaterhouseCoopers UK) started by stating that the Earth has never been under so much strain, with many species at risk of extinction, the chemistry of the oceans changing at a rapid pace, air and water quality dropping, and climate change worsening. This is the backdrop against which the fourth industrial revolution is happening, and technologies such as AI can be put to use to address some of the Earth’s major challenges. For example, smart transportation systems are crucial for managing climate change, while precision agriculture allows for a more efficient use of natural resources.

It is in this context that the Fourth Industrial Revolution for the Earth initiative was started. It functions as a multistakeholder platform dedicated to developing a research base for applications for the Earth, supporting breakthroughs in this area, and building an accelerator platform to support projects and ventures to address the use of technology for the benefit of the Earth.

Herweijer noted that sustainability and responsibility principles need to be embedded into AI systems. It is also important to consider the risks of AI leading to bias and deepened inequalities in the early stages of developing AI applications. In addition, once developed and put to use, these applications should be monitored constantly so as to identify possible negative implications that may not have been considered during the development stage.

Mr Wendell Wallach (Consultant, Ethicist, and Scholar at the Yale University Interdisciplinary Center for Bioethics) spoke about the importance of looking not only at the benefits of AI, but also at the potential risks and undesirable consequences. He called for a distinction to be made between outwardly-turning and inwardly-turning AI for good. Outwardly-turning AI for good is about the potential of AI to help achieve the sustainable development goals (SDGs). But then we should also consider the impact of AI on areas such as decent work and global inequality, which are covered by the SDGs as well. While AI can help achieve the SDGs, it can also undermine our ability to achieve some of them. Inwardly-turning AI for good is about mitigating the harms that come with the progress of AI, and making sure that we do not go down a path we actually do not want. It is therefore important to look at both sides of AI for good, and devise technological and governance solutions to have appropriate oversight over the technologies we develop.

In response to a question from the audience about whether we should focus more on issues such as rights and responsibilities for AI systems, Wallach pointed out that while such issues could be considered by researchers, we should focus more on the real challenges we have today. We should put more emphasis on the AI implications that are truly feasible and require immediate attention, and maybe less on those related to technologies we do not yet have.

During the discussions, a point was made that there is a mismatch between the adoption rate of AI technology and the ability to understand it. To address this, emphasis should be placed on issues such as audits for AI systems, the ethics of AI, and AI explainability. At the moment, many of the processes behind AI applications function as ‘black boxes’, and it is not clear how they make certain decisions or reach certain conclusions. While work is being done to make algorithms more explainable, we may need to accept that humans will not be able to fully understand some systems. In such cases, it is important to carefully assess the risks of such systems during the development phase, test them in simulation environments, and continue to monitor them while in use, so that possible negative implications can be corrected.

The session ended with a discussion on education systems and the need to adapt them to an increasingly AI-driven society. Investments are needed to enhance education systems and to ensure that they produce the needed number of AI engineers and data scientists. At the same time, the nature of education needs to change, so that AI is taught from a multidisciplinary perspective, combining, for example, technology with ethics. Re-training the current workforce is also an important element to be considered, especially given that progress in AI renders some jobs obsolete.

Kaspersen concluded the session by stating that the biggest transformation brought about by AI is about us, humans, and about how we adapt, evolve, govern, and educate ourselves and the world we live in.

The AI for Good Global Summit 2018 was opened with a keynote speech by Sir Roger Penrose (Emeritus Rouse Ball Professor of Mathematics, University of Oxford). Based on his experience in physics, mathematics, and philosophy, Penrose addressed the question of Why Algorithmic Systems Possess No Understanding.

Artificial intelligence (AI) has advanced tremendously, and these developments have coincided with questions of whether, or when, AI will reach the level of human intelligence. Penrose compared AI to the cerebellum, the part of the brain that receives input from the sensory systems and integrates it to fine-tune movement, coordination, precision, and timing, and he contrasted the cerebellum with the cerebrum, which initiates and coordinates activity in the body. Penrose explained that the relation between the cerebrum and the cerebellum is akin to that between the programmer and the program.

According to Penrose, just as the cerebellum lacks an understanding of why it does what it does, computers are unlikely to encapsulate understanding or consciousness any time soon. The gap between algorithmic computation and genuine understanding is visible in quantum physics, and Penrose cited the example of Schrödinger’s cat to highlight the discrepancy between computational outcomes (the cat is both alive and dead) and understanding (the cat is either alive or dead).

So how can we conceive of consciousness and understanding? Penrose suggested that they must be rooted in physics, and may be explained by microtubules, tiny tubes located within brain neurons. According to this theory, the fine-scale activities of these microtubules form the building blocks of consciousness. In the absence of these biophysical elements, computers are unlikely to attain consciousness and understanding.

Penrose’s lecture was followed by a Q&A session moderated by Mr Stephen Ibaraki (Futurist and Social Entrepreneur), who started by asking Penrose about his thoughts on the term ‘AI’. Penrose explained that he felt particularly ‘nervous about the word intelligence’. Intelligence commonly requires understanding, and understanding commonly requires awareness. As AI devices are not aware, they are not intelligent in the normal sense of the word; while such a system can achieve a lot, it ‘doesn’t seem to know what it’s doing’. He suggested that, instead of AI, we could adopt the term ‘artificial cleverness’.

At the same time, Penrose explained that there is still a lot of room for AI to develop further. We can continue to use our human understanding to improve the algorithmic system, integrating missing ingredients and transforming it into something that ‘goes beyond what you had before’. Yet, without the quantum processes of microtubules taking place in human brains, could computers ever mimic conscious brain activities? Penrose explained that we might, some day in the far future, be able to construct such protoconscious elements in a laboratory. However, this would raise many ethical problems that we are not ready to face.

Opening the session, co-moderators Mr Dirk Krischenowski, dotBERLIN GmbH & Co. KG, and Ms Maarja Kirtsi, Estonian Internet Foundation/.ee, explained that the discussion would focus on issues related to innovation and competition in the domain name market, especially in the context of the new generic top-level domains (gTLDs) launched by the Internet Corporation for Assigned Names and Numbers (ICANN) in 2014.

To kick-start the debates, Krischenowski gave an overview of a study conducted by ICANN on competition, consumer trust, and consumer choice in the domain name market. Among the main findings of the study: new gTLDs contributed to the growth of the market; the sales channel integrated the new gTLDs quickly, leading to much greater consumer choice; many new registrar operators entered the market, especially in previously under-developed markets; the number of registry operators increased by a factor of 60; and typical new TLDs are niche, targeted, and geographic TLDs. Overall, the New gTLD Program has led to a dramatic increase in consumer choice, a modest increase in competition, and minimal impact on consumer trust.

Ms Elena Plexida, European Commission (EC), talked about the evaluation and revision process that the EC has launched with regard to the regulations for the .eu TLD. She explained that the .eu TLD was formally established by Regulation 733/2002, while EC Regulation 874/2004 set the rules for the registry and the operation of the .eu TLD. The .eu TLD was delegated by ICANN in 2005. As the market has continuously changed, these regulations have become outdated, have generated administrative challenges, and need revision. Issues to be analysed during the evaluation process include: whether the objectives of .eu have been achieved (to boost e-commerce and to empower end-users to create a European digital identity); the legal separation between registry and registrars; and whether (and how) the registry should be more active in other Internet governance areas.

Mr Jörg Schweiger, DENIC e.G./.de, outlined one issue of concern for the domain name industry: how to make sure that domain names do not sink below the surface, in the sense that they exist from a technical point of view, but users are not really aware of them. The industry has been constantly looking for the ‘killer application’ to address this issue. He pointed out that one way to make domain names more attractive could be to build on the discussions about self-determination, sovereignty, and identity. He added that the main objective of .de is now to retain as many domain names as possible, and that the direction in which the registry is growing is not necessarily related to innovation per se, but rather to providing a secure domain name space.

Ms Lianna Galstyan, Internet Society Armenia, said that the .am registry never had an objective to have a high number of domain name registrations, but rather, to give the community the possibility to register domain names under .am. The same rationale was also behind the launch of the Armenian Internationalised Domain Name (IDN).

Mr Ardi Jürgens, Zone Media OÜ, pointed out that domain names do not exist in a bubble; they are part of a system which includes resources and applications. Healthy growth in the demand for domain names could result in applications and people using domain names to create value, either for themselves or for society. In the search for a ‘killer application’, the industry should look at young people and try to find a way to create value for them within the domain name space. Compared to social media platforms, domain names have the main advantage of being under the control of the registrant, and this is something that the industry should try to communicate better.

Mr Andrea Beccalli, ICANN, discussed examples of innovation in the DNS, such as the new gTLDs, the introduction of IDN TLDs, and the DNS Security Extensions (DNSSEC). Even the community work on developing the rules and processes for the New gTLD Program can be seen as a form of innovation. Schweiger, however, argued that the new round of gTLDs does not necessarily mean innovation, as it simply presented what was on the market already – TLDs. Moreover, most business models surrounding new gTLDs are similar to what had been on the market before their introduction, with only a few exceptions.

Security in the domain name space was mentioned during the discussions as an area that deserves more attention. There are troubling correlations between new gTLDs and ‘innovation in crime’, and some service providers have blocked all new gTLDs from their servers due to security concerns. Innovation on the security front should be a priority for new gTLDs. Privacy is also an issue that requires increased attention, as users are increasingly demanding in this regard.

The risk of cybersquatting was also raised as an issue of concern for new gTLDs, with regard to the protection of trademarks. It was said that the current protection mechanisms (such as the sunrise period allowing trademark holders to register relevant domain names, and mechanisms for rights enforcement after domain name registration) are helpful, but not sufficient. Such issues are currently being analysed within the ICANN framework.

At the end of the session, a point was raised that it is not actually clear what is innovative in the domain name space, as TLDs have been in place for many years and they remain essentially the same ‘technology’ or ‘tool’ they have been since the creation of the DNS.

Other resources

The set of guidelines contains recommendations on how to mitigate security threats and weaknesses in Internet of Things services. It includes guidelines for service ecosystems, endpoint ecosystems, and network operators.

