Societal Responsibility of Artificial Intelligence – Group of authors – Page 15

MACHINE LEARNING


Machine learning concerns the design, analysis, development and implementation of methods that allow a machine (in the broadest sense) to evolve through a systematic process and thus perform tasks that would be difficult or impossible to accomplish by more traditional algorithmic means. The algorithms used allow, to a certain extent, a computer-controlled system (possibly a robot) or a computer-assisted one to adapt its analyses and response behaviors based on empirical data from a database or from sensors.
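This adaptation from empirical data can be illustrated with a minimal sketch: a perceptron whose response behavior is adjusted whenever its prediction disagrees with an observed example. The training data (the logical AND function) and learning rate are illustrative assumptions, not material from this chapter.

```python
# Minimal sketch of a system that adapts its responses from empirical data:
# a perceptron corrects its weights whenever prediction and data disagree.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Adjust weights from empirical feedback until behavior fits the data."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            error = y - pred                      # empirical feedback signal
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

# Learn the logical AND function from observed examples.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]
w, b = train_perceptron(X, y)

def predict(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

print([predict(x) for x in X])  # [0, 0, 0, 1]: behavior now matches the data
```

The point of the sketch is the absence of an explicit rule: the mapping is never programmed, only inferred from the examples, which is what distinguishes this approach from traditional algorithmic means.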

In our view, adopting machine learning is no longer merely useful, but necessary. Thus, in light of the digital transition and this “war of intelligences” (Alexandre 2017), companies will undergo a major transformation and will invest in AI applications in order to:

 – increase human expertise via virtual assistance programs;

 – optimize certain products and services;

 – bring new perspectives in R&D through the evolution of self-learning systems.

Therefore, AI holds great promise, but also raises strong fears, hazards and dangers that must be corrected or even removed to ensure an implementation consistent with the legal framework, moral values and ethical principles, and the common good. The conflicts in question can be very varied. Indeed, machines such as robotic assistants have, by themselves, no concept of good and evil: they need to be taught everything. Autonomous cars may involve us in accidents or dangerous situations. Some conversational agents may insult individuals, give them bad advice, or otherwise treat them unkindly.

Thus, even if, today, ethical recommendations have little impact on the functional scope of AI and introduce an additional level of complexity into the design of self-learning systems, it will become essential to design and integrate ethical criteria into digital projects related to AI.

Several standards dealing with algorithmic systems, transparency, privacy, confidentiality, impartiality and more generally with the development of ethical systems have been developed by professional associations such as the IEEE (Institute of Electrical and Electronics Engineers) and the IETF (Internet Engineering Task Force)3.

To this can be added documents focusing on ethical principles related to AI, such as:

 – the Asilomar AI Principles, developed at the Future of Life Institute, in collaboration with attendees of the high-level Asilomar conference of January 2017 (hereafter “Asilomar” refers to Asilomar AI Principles, 2017);

 – the ethical principles proposed in the Declaration on Artificial Intelligence, Robotics and Autonomous Systems, published by the European Group on Ethics in Science and New Technologies of the European Commission, in March 2018;

 – the principles set out by the High-Level Expert Group on AI, via a report entitled “Ethics Guidelines for Trustworthy AI”, for the European Commission, December 18, 2018;

 – the Montreal Declaration for AI, developed at the University of Montreal, following the Forum on the Socially Responsible Development of AI of November 2017 (hereafter “Montreal” refers to Montreal Declaration, 2017);

 – the best practices in AI of the Partnership on AI, the multi-stakeholder organization – composed of academics, researchers, civil society organizations and companies building and utilizing AI – that, in 2018, studied and formulated best practices in AI technologies. The objective was to improve public understanding of AI and to serve as an open platform for discussion and engagement on AI and its influence on individuals and society;

 – the “five fundamental principles for an AI code”, proposed in paragraph 417 of the UK House of Lords Artificial Intelligence Committee’s report, “AI in the UK: Ready, Willing and Able”, published in April 2018 (hereafter “AIUK” refers to House of Lords, 2018);

 – the ethical charter drawn up by the European Commission for the Efficiency of Justice (CEPEJ) on the use of AI in judicial systems and their environment. It is the first European text setting out ethical principles relating to the use of AI in judicial systems (see Appendix 1);

 – the ethical principles of Luciano Floridi et al. in their article entitled “AI4People – An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations”, Minds and Machines, December 2018;

 – the OPECST (Office parlementaire d’évaluation des choix scientifiques et technologiques) report (De Ganay and Gillot 2017);

 – the six practical recommendations of the report of the CNIL (Commission nationale de l’information et des libertés)4 on the ethical issues of algorithms and AI, drafted in 2017 (see Appendix 2);

 – the report published by the French member of parliament Cédric Villani (2018) on AI;

 – the Declaration on Ethics and Data Protection in the Artificial Intelligence Sector, at the 40th International Conference of Data Protection and Privacy Commissioners (ICDPPC), Tuesday, October 23, 2018, in Brussels;

 – the seven guidelines5 developed by the European High Level Expert Group on AI, published on April 8, 2019 by the European Commission;

 – the five principles set out in the OECD Council Recommendation on the development, implementation and use of AI, adopted on May 22, 2019 by the OECD Council at ministerial level6.

How can ethical frameworks, regulations, technical standards and best practices be applied in ways that are environmentally sustainable and socially acceptable? It is clear that these shared frameworks do not guarantee success: mistakes and illegal behavior continue to occur. But putting them to use requires a clear and precise idea of what needs to be done and of how to evaluate competing solutions.

This diversity of approaches and initiatives on the subject reflects the major challenge of establishing a common framework for the ethical governance of AI. This raises a delicate and decisive question: how, and by which characteristics, should the ethical governance of AI be defined? What are the “measurable” values translating notions of loyalty, responsibility, trust, and thus of ethics, applied to algorithmic decisions when they are the consequence or the result of a prediction?
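One way such a notion can be made “measurable” is through a fairness metric. The sketch below computes the demographic parity difference, the gap between groups in the rate of favourable algorithmic decisions, a standard metric in the fairness literature; the decisions and group labels are illustrative assumptions, not data from this chapter.

```python
# Hedged sketch: demographic parity difference as one candidate
# "measurable" value for algorithmic decisions. Illustrative data only.

def demographic_parity_difference(decisions, groups):
    """Gap between groups' rates of favourable (1) algorithmic decisions."""
    counts = {}
    for d, g in zip(decisions, groups):
        n, pos = counts.get(g, (0, 0))
        counts[g] = (n + 1, pos + d)
    per_group = [pos / n for n, pos in counts.values()]
    return max(per_group) - min(per_group)

# 1 = favourable decision (e.g. a granted loan), two groups "A" and "B".
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_difference(decisions, groups)
print(gap)  # 0.75 - 0.25 = 0.5
```

A gap of 0 would indicate that both groups receive favourable decisions at the same rate; the threshold at which a nonzero gap becomes unacceptable is precisely the kind of question an ethical governance framework must settle.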

It is from this vision of universalization that we felt the need to write this book around a framework for AI applicable to all. We have therefore developed a moral framework to support digital AI projects by observing a set of requirements, recommendations and rules that are elaborated, verified and discussed at each stage of design, implementation and use. This allowed us to define ethical criteria, according to our determinants, that are both essential and universal, based on the principle of Ethics by Design7 or Human Rights by Design, and to move toward a fully innovative principle of Ethics by Evolution that we will develop throughout this book. The objective is to achieve AI that is safer, more secure and better adapted to our needs, both ethical and human, over time. This will help optimize our ability to monitor progress against criteria of sustainability and social cohesion. AI is, therefore, not an end in itself, but rather a means of increasing individual and societal well-being.
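The idea of requirements verified at each stage of design, implementation and use can be sketched as a simple gating structure: a stage may only proceed once its ethical criteria are met. The stage names and criteria below are hypothetical illustrations, not the determinants developed in this book.

```python
# Hypothetical sketch of stage-by-stage ethical verification.
# Stage names and criteria are illustrative assumptions only.

STAGES = {
    "design":         ["privacy impact assessed", "bias sources identified"],
    "implementation": ["audit log enabled", "decisions explainable"],
    "use":            ["human oversight in place", "feedback channel open"],
}

def gate(stage, satisfied):
    """Return the unmet criteria; an empty list means the stage may proceed."""
    return [c for c in STAGES[stage] if c not in satisfied]

unmet = gate("design", {"privacy impact assessed"})
print(unmet)  # ['bias sources identified']
```

Such a gate is static; an Ethics by Evolution approach would additionally revise the criteria themselves as the system and its uses change over time.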
