James E. Baker
The Centaur’s Dilemma
NATIONAL SECURITY LAW FOR THE COMING AI REVOLUTION
.....
AI philosophers prefer a different example to explain the moral and other limitations in current AI—the ubiquitous crosswalk dilemma or “trolley problem,” a famous ethical thought experiment. The scenarios vary. Imagine two persons entering a crosswalk: one a bank robber fleeing the scene of a crime, the other a pregnant woman running after a child. Here comes a car or trolley. The driverless vehicle AI is likely to calculate what to do based on mathematical inputs that might predict the course with the highest probability of avoiding both individuals, and if that is not possible, to be certain to avoid at least one of the individuals, likely seeing the pedestrians as having equivalent value. But the calculations will be based on what is already embedded in the machine’s software and training data, not the new contextual information on site, in the moment, about the characteristics of the people within the crosswalk. Engineers refer to AI that lacks this sort of situational awareness and flexibility as being “brittle.” In contrast, a human driver, if alert, will adjust and select a new course of action based on experience, judgment, intuition, and moral choice involving the actual pedestrians, erring, we assume, on the side of missing the pregnant woman at the risk of hitting the bank robber.
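[Editorial illustration, not drawn from the book: the short Python sketch below shows one plausible reading of the decision rule described above. The trajectory names, probabilities, and function names are hypothetical. The vehicle ranks candidate courses purely on pre-computed collision probabilities, with both pedestrians treated as interchangeable, equal-valued obstacles, so none of the on-scene context a human driver would weigh ever enters the computation.]

from dataclasses import dataclass

@dataclass
class Trajectory:
    name: str
    p_hit_a: float  # pre-computed probability of hitting pedestrian A
    p_hit_b: float  # pre-computed probability of hitting pedestrian B

def choose_trajectory(candidates):
    """Prefer the course most likely to miss both pedestrians; if no course
    has any chance of missing both, fall back to a course that is certain
    to miss at least one of them. Both pedestrians are equal-valued
    obstacles here: nothing in these inputs can say that A is a fleeing
    robber and B is a pregnant woman chasing a child, which is the
    "brittleness" described in the passage."""
    def p_avoid_both(t):
        # Treat the two collision risks as independent probabilities.
        return (1.0 - t.p_hit_a) * (1.0 - t.p_hit_b)

    best = max(candidates, key=p_avoid_both)
    if p_avoid_both(best) > 0.0:
        return best  # some chance of missing everyone, so take it

    # No course can miss both; pick one guaranteed to miss at least one,
    # weighting the remaining risk to the two pedestrians equally.
    certain_for_one = [t for t in candidates
                       if t.p_hit_a == 0.0 or t.p_hit_b == 0.0]
    if certain_for_one:
        return min(certain_for_one, key=lambda t: t.p_hit_a + t.p_hit_b)
    return best

if __name__ == "__main__":
    options = [
        Trajectory("brake hard", p_hit_a=0.4, p_hit_b=0.4),
        Trajectory("swerve left", p_hit_a=0.0, p_hit_b=0.9),
        Trajectory("swerve right", p_hit_a=0.9, p_hit_b=0.0),
    ]
    print(choose_trajectory(options).name)  # prints "brake hard"

[The point of the sketch is what is absent: any attribute of the people in the crosswalk would have to have been anticipated and encoded in advance, which is exactly the in-the-moment situational awareness the passage says narrow AI lacks.]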
Alas, there are real-world examples of this problem. In 2018, a driverless Uber test vehicle hit and killed a bicyclist at night on a Phoenix road. There was no moral dilemma to address; the vehicle’s sensors and computer failed because they were not trained to identify a bicycle at night. According to press reports, the computer initially classified the person and bike as “an unrecognized object,” apparently without reference to the human on board. The vehicle eventually sought to stop, but not in time. Neither did the human safety driver in the test vehicle respond in time.[18] No wonder there are strongly held views about the safety of driverless cars; proponents seek to deter anecdotal reasoning and invite consideration of trend lines and safety percentages in comparison to human drivers. The case is presented here not to take sides in the driverless car debate, but because it illustrates a present weakness in narrow AI. Policymakers and lawyers should now imagine how this lack of situational awareness might affect military applications of AI.
.....