Cyberphysical Smart Cities Infrastructures (collective of authors)

1.1 Introduction


In the article by Bryson and Winfield, “Standardizing Ethical Design for Artificial Intelligence and Autonomous Systems” [1], the authors explore several important concepts. One of the most important is how they define intelligence. According to the authors, intelligence requires: “The capacity to perceive contexts for action, the capacity to act, and the capacity to associate contexts to actions” [1]. This definition matters because we must be able to compare organic intelligence with artificial intelligence (AI) in order to distinguish AI as a new category of thought. The other important concept they discuss is the standardization of ethics as it applies to AI. According to Bryson and Winfield, standards set by the consensus of a large group should account for ethical implications, and the machine learning (ML) code that powers AI should incorporate those ethics. While Bryson and Winfield discuss the importance of such ethical standards, they do not specify what the ethics should be, leaving them open to interpretation. In this chapter, that gap will be examined in an effort to establish a baseline.
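Bryson and Winfield's three capacities can be read as a minimal agent interface. The sketch below is purely illustrative and not from [1]; the class name, context labels, and sensor threshold are all invented for the example. It separates the capacity to perceive a context, the capacity to associate contexts with actions, and the capacity to act.

```python
# Illustrative sketch (not from the source) of the three capacities in
# Bryson and Winfield's definition of intelligence.
class MinimalAgent:
    def __init__(self):
        # Capacity to associate contexts to actions: a simple policy table.
        self.policy = {"obstacle_ahead": "turn", "clear_path": "advance"}

    def perceive(self, raw_reading):
        # Capacity to perceive a context for action: map a raw sensor
        # reading (hypothetical distance value) to a named context.
        return "obstacle_ahead" if raw_reading < 0.5 else "clear_path"

    def act(self, context):
        # Capacity to act: look up and return the associated action.
        return self.policy.get(context, "wait")


agent = MinimalAgent()
print(agent.act(agent.perceive(0.3)))  # turn
print(agent.act(agent.perceive(0.9)))  # advance
```

On this reading, an entity lacking any one of the three capacities (a sensor with no actuator, or an actuator with no context mapping) would not count as intelligent under the authors' definition.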

Continuing the exploration of the ethical dilemmas posed by AI technology, in February 2019 the AMA Journal of Ethics published an article entitled “Ethical Dimensions of Using Artificial Intelligence in Health Care” [2]. The article explores the role that AI plays in healthcare, along with its ethical implications. Its main focus is finding a balance between the benefits of AI technology and the inherent risks associated with it.

Another article that provided important insight was “Artificial Intelligence in Medicine” by Hamet and Tremblay [3]. In it, they describe two main branches of AI in medicine: a physical branch and a virtual branch. The virtual branch, which can also be viewed as deep learning, has three aspects: “(i) unsupervised (ability to find patterns), (ii) supervised (classification and prediction algorithms based on previous examples), and (iii) reinforcement learning (use of sequences of rewards and punishments to form a strategy for operation in a specific problem space)” [3]. The physical branch, by comparison, largely involves robots that provide a variety of services and applications to both users and physicians.
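Hamet and Tremblay's three-way split of the virtual branch can be made concrete with a toy sketch. The code below is an illustration of the three paradigms, not anything from the source: a tiny one-dimensional k-means (unsupervised pattern finding), a nearest-neighbour classifier (supervised learning from labelled examples), and tabular Q-learning on a five-cell corridor (reinforcement learning from rewards). All data values and parameters are invented for the example.

```python
import random

random.seed(0)

# (i) Unsupervised: find two cluster centres in unlabelled 1-D data (toy k-means).
data = [1.0, 1.2, 0.8, 8.0, 8.3, 7.9]
c0, c1 = 0.0, 10.0                      # arbitrary initial centre guesses
for _ in range(10):
    g0 = [x for x in data if abs(x - c0) <= abs(x - c1)]
    g1 = [x for x in data if abs(x - c0) > abs(x - c1)]
    c0, c1 = sum(g0) / len(g0), sum(g1) / len(g1)

# (ii) Supervised: 1-nearest-neighbour classification from labelled examples.
train = [(1.0, "low"), (1.1, "low"), (8.0, "high"), (8.2, "high")]
def classify(x):
    return min(train, key=lambda p: abs(p[0] - x))[1]

# (iii) Reinforcement: tabular Q-learning on a 5-cell corridor with the
# reward at cell 4; actions are step left (-1) or step right (+1).
q = {(s, a): 0.0 for s in range(5) for a in (-1, 1)}
for _ in range(500):
    s = 0
    while s != 4:
        # epsilon-greedy action choice (20% exploration)
        if random.random() < 0.2:
            a = random.choice((-1, 1))
        else:
            a = max((-1, 1), key=lambda act: q[(s, act)])
        s2 = min(4, max(0, s + a))      # move, clipped to the corridor
        r = 1.0 if s2 == 4 else 0.0     # reward only at the goal cell
        # Q-learning update: alpha = 0.5, gamma = 0.9
        q[(s, a)] += 0.5 * (r + 0.9 * max(q[(s2, -1)], q[(s2, 1)]) - q[(s, a)])
        s = s2

policy = [max((-1, 1), key=lambda act: q[(s, act)]) for s in range(4)]
print(round(c0, 1), round(c1, 1))  # 1.0 8.1
print(classify(7.5))               # high
print(policy)                      # greedy action per state after training
```

The three fragments share nothing but Python: the first is given no labels and recovers structure, the second generalizes from labelled examples, and the third learns a strategy purely from the reward signal, mirroring the article's taxonomy.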

In the official document, the National Artificial Intelligence Research and Development Strategic Plan [4], the future of AI is laid out. Across eight strategies, the National Science and Technology Council outlines the steps it considers priorities for Federal investment: “The Federal Government must therefore continually reevaluate its priorities for AI R&D investments to ensure that investments continue to advance the cutting edge of the field and are not unnecessarily duplicative of industry investments” [4]. Seven of the eight strategies carry over from the 2016 report; because they are not new, the focus here is on the eighth, and only new, strategy. It calls for partnership between the federal government and academia, industry, and others involved in AI research and development, so that breakthroughs continue to be generated. The plan also addresses ethics in AI, which will be drawn on as that topic is explored here.

In his article, “Hacking AI: Rethinking cybersecurity for artificial intelligence” [5], Davey Gibian explores how traditional cybersecurity is insufficient for evolving AI technologies. He argues that AI cybersecurity requires “two algorithm‐level considerations: robustness and explainability” [5]. One interesting point he makes under robustness concerns eliminating bias as part of AI cybersecurity. This chapter will examine how such bias can be introduced by the ethics implemented into AI.

The idea that traditional cybersecurity is insufficient for modern and future AI technology is also supported by Ilja Moisejevs in his article “What everyone forgets about machine learning” [6]. He briefly traces the history of cybersecurity and cyber threats, then explains the need for cybersecurity in ML and the impact that failing to implement it can cause (Figure 1.1).

