In February 2020 the European Commission released the "White Paper on Artificial Intelligence – A European Approach to Excellence and Trust", which aims to define AI, underline its benefits and technological advances in different areas, including medicine, security, and farming, and identify its potential risks: opaque decision-making, gender inequality, discrimination, and lack of privacy. Building on the European strategy for AI presented in April 2018, the white paper is a complex document analyzing Europe's strengths, weaknesses, and opportunities in the global market for artificial intelligence.
In 2018, 33 zettabytes of data were produced, and this volume is expected to exceed 175 zettabytes by 2025. The rapid development of new technologies and the increasing role of AI are driving global competition and demand a global approach, within which Europe must identify its own role. The European Commission emphasizes building multidisciplinary international cooperation between the private and public sectors and academia. AI governance, moreover, provokes debate and should guarantee broad multi-stakeholder participation and multidisciplinarity at the European, national, and international levels, as well as partnership between academia and the private and public sectors.
The Guidelines of the High-Level Expert Group identify seven key requirements: technical robustness and safety; human agency and oversight; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental wellbeing; and accountability.
With this, the EU embraces the responsibility of addressing risks in the use and development of these technologies, which must be developed in accordance with European values: to promote peace; to offer freedom, security, justice, and sustainable development; to combat discrimination; to ensure scientific and technological progress; and to respect cultural and linguistic diversity.
Leadership in AI
Europe has advantages both for users and for technology development: a strong academic sector, innovative startups, and multiple manufacturing and service industries in fields such as healthcare, finance, and agriculture. Europe is also a leader in the algorithmic foundations of AI, and a quarter of all industrial and service robots are produced in Europe. Nevertheless, Europe holds a weak position in developing consumer applications and suffers from a lack of investment, skills, and trust in AI, which is a significant disadvantage in exploiting its data assets. The EU is a global leader in low-power electronics and neuromorphic solutions, but the AI processor market is dominated by non-EU players; the European Processor Initiative could change this.
The EU's objective is now to become an attractive, safe, and efficient data-agile economy and a global leader in AI. In doing so, the EU wants to ensure that these developing technologies will benefit European citizens, "improving their lives while respecting their rights".
Increasing investments in AI
Over the past three years, EU funding for AI research has increased by 70% compared to the previous period, reaching €1.5 billion. For comparison, and to illustrate the EU's need to increase funding for AI research and development: in 2016 the EU invested €3.2 billion in AI, while North America invested around €12.1 billion and Asia €6.5 billion.
Europe holds a large amount of under-used public and industrial data and has secure, low-power digital infrastructure. To secure global leadership, the EU supports an investment-oriented approach: Europe needs to significantly increase its investment in this sector, which requires mobilizing private and public funding for next-generation technologies.
In December 2018 the Commission presented a Coordinated Plan intended to accelerate the development of AI in Europe, proposing 70 joint actions in research, funding, market uptake, talent acquisition, and international and multidisciplinary cooperation. The plan is to run until the end of 2020. The European Union's objective is to attract over €20 billion of investment per year through the Digital Europe Programme, Horizon Europe, and the European Structural and Investment Funds.
Human-centric technologies, privacy as a fundamental human right
These technologies have to be developed in compliance with EU rules protecting fundamental rights and consumers, with the aim of giving citizens confidence in AI systems: "European AI is grounded in our values and fundamental rights such as human dignity and privacy protection". Europe thus wants to ensure citizens' trust in technology, arguing that trustworthiness is a necessary component of technological development, which is impossible without the explainability of otherwise opaque technologies, and which must also take into account the poor awareness of many digital users.
Protection of Human Autonomy & Agency
While European citizens are wary of algorithmic decision-making, countries are struggling with legal uncertainty. The document states that AI is a collection of technologies that combine data, algorithms, and computing power. All three of these components can be biased and can consequently lead to material and immaterial harm and other unpredictable consequences. According to the EU, AI has a significant role to play in achieving the Sustainable Development Goals (SDGs) and in safeguarding democratic processes and human rights. There should be concrete actions to protect human agency and autonomy and to educate conscious digital citizens.
AI Ethics & research fragmentation
The complex nature of many new technologies results in cases where AI can be used to protect fundamental human rights but can also be used for malicious purposes. As mentioned above, international cooperation on AI matters must be based on respect for fundamental rights, including human dignity, pluralism, inclusion, non-discrimination, and the protection of privacy and personal data.
A major issue in AI ethics is research fragmentation. The current fragmented knowledge landscape is no longer acceptable, so it is critical to create synergies among Europe's many research centres, to foster cooperation in research, and to establish testing centres. The updated Digital Education Action Plan aims to reinforce technological skills.
The EU's position aims to promote the ethical use of AI. Ethical guidelines were developed by the High-Level Expert Group, and the EU was also closely involved in developing the OECD's ethical principles for AI, which the G20 subsequently endorsed in June 2019. The EU recognizes the important work on AI by UNESCO, the Council of Europe, the OECD, the WTO, and the ITU. At the UN, the EU is involved in the follow-up of the report of the High-Level Panel on Digital Cooperation, supporting regulatory convergence.
The purpose of the white paper is to set out policy options and legal frameworks, based on European fundamental values, to make Europe a global leader in innovation in the data economy and its applications, and to develop an AI ecosystem beneficial for citizens, business, and the public interest at both national and international levels. The Report accompanying the white paper analyses the relevant legal framework and underlines its uncertainty. In 2019, over 350 organisations tested the High-Level Expert Group's assessment list and sent feedback. A key result of the feedback process is that while many requirements are already reflected in existing legal or regulatory regimes, those regarding transparency, traceability, and human oversight are not specifically covered under current legislation.
The regulatory framework requires compliance with EU legislation, principles, and values: freedom of expression, freedom of assembly, human dignity, non-discrimination on grounds of gender, race or ethnic origin, religion or belief, disability, age, or sexual orientation, and the protection of personal data and private life. The document says that AI needs to be considered over its whole lifecycle, but machine learning, especially deep learning, presents explainability challenges that complicate compliance with some policy goals. Europe has academic strength in quantum computing and quantum simulators, and the document encourages increasing the availability of testing and experimentation facilities in this field.
The white paper makes clear that the EU's legislative framework will be extended. Some specific features of AI (e.g. opacity, complexity, unpredictability, and partially autonomous behavior) can be hard to verify and make the enforcement of legislation more difficult. As a result, in addition to the current legislation, new legislation specific to AI is needed.
The Commission underlines the importance of improving digital literacy for all citizens and raising awareness of issues related to data privacy, transparency, the definition of AI, data governance, responsibility and trust, and the dual use of technologies. The European Commission invited citizens to send comments and suggestions regarding the white paper, with public comments accepted until June 14.