The benchmark for Artificial Intelligence (AI) is the famous Turing Test. Alan Turing's 1950 thought experiment states that if a robot can convince you that you're talking to another human being, then that robot can be said to have passed the Turing Test, thereby proving that there is nothing special about the human brain that a sufficiently powerful computer couldn't do just as well.
Except the Turing Test proves no such thing. All it proves is that humans can be tricked, but everyone knew that already … except Alan Turing, alas, who in the last week of his life – and this is a true story – went to a funfair fortune-teller on Blackpool promenade. Nobody knows what the Gypsy Queen told him, but he emerged from her tent white as a sheet and killed himself two days later. But funfairs have had centuries of practice in the art of tricking punters.
Weirdly, a funfair nearly did for Isaac Newton. In a posthumous biographical sketch, his friend John Wickens says that when they went to Sturbridge County Fair, Newton had a complete meltdown, and was close to jettisoning his whole theory of how gravity acts on every object in the universe, after what Wickens describes as: ‘a frustrating hour at the coconut shy’.
In an interview with The Times about Artificial Intelligence, Brian Cox said:
There is nothing special about human brains. They operate according to the laws of physics. With a sufficiently complex computer, I don't see any reason why you couldn't build AI. We'll soon have robot co-workers; the difference is we'll even be taking them to the office party.
I wrote a letter to The Times. They didn't print it. I don't know why. It was quite short. It just said: 'No we fucking won't'.
Emotional robots are a vision of the future to be found in the Gypsy Queen’s crystal ball but not in science. Not least because of these two uncontroversial scientific facts:
1. We are not machines, we are animals.
2. No experiment performed by anyone anywhere in the whole world at any time has found a shred of evidence to suggest the remotest possibility that a 'sufficiently complex computer' will ever be able to do literally the first thing that a mammalian brain does: experience emotion.
We came crying hither.
Thou know’st the first time that we smell the air
We wawl and cry …
But to listen to AI cultists you'd think we were knee-deep in this sort of evidence. According to Radio 4's Inside Science programme, for example, we'll soon have robot lawyers.
A senior IBM executive explained to Inside Science listeners that while robots can’t do the fiddly manual jobs of gardeners or janitors, they can easily do all that lawyers do, and will soon make human lawyers redundant.
Interestingly, however, when IBM Vice President Bob Moffat was himself on trial in the Manhattan Federal Court, accused in 2010 of the largest hedge-fund insider trading in history, he hired one of those old-time humanoid defence attorneys. A robot lawyer may have saved him from being found guilty of two counts of conspiracy and fraud, but when push came to shove, the IBM VP knew there’s no justice in automated law.
Not all the gigabytes in the world will ever make a set of algorithms a fair trial. There can be no justice in the broad sense without procedural justice in the narrow sense. Even if the outcome of a jury trial is identical to the outcome of an automated trial, due process leaves one verdict just and the other unjust. Justice entails being judged by flesh and blood citizens in a fair process. Not least because victims increasingly demand that the court consider their psychological and emotional suffering – which computers cannot do.
There’s a curious contradiction here that nobody ever talks about: at the same time as science proclaims its moral neutrality, proponents of AI want machines to become moral agents. Never more so than with what Nature has taken to calling ‘ethical robots’.
Ethical robots, it seems, will come as standard fittings on the driverless cars being developed by Apple, Google and Daimler. They will answer the big questions, automatically …
Should driverless cars be programmed to mount the pavement to avoid a head-on collision? Should they swerve to hit one person in order to avoid hitting two? Two instead of four? Four instead of a lorry full of hazardous chemicals? This is what the ‘ethical robot’ fitted into each driverless car will decide. How will it decide? In July 2015, Nature published an article, ‘The Robot’s Dilemma’, which explained how computer scientists:
have written a logic program that can successfully make a decision … which takes into account whether the harm caused is the intended result of the action or simply necessary to it.
Is the phrase ‘simply necessary’ chilling enough for you?
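Reduced to its bones, the rule the Nature piece describes is the old doctrine of double effect dressed up as software: harm is allowed so long as it is not the intended result, merely 'simply necessary'. A caricature of such a rule can be sketched in a few lines (a hypothetical illustration only; the function, fields and thresholds here are my assumptions, not the researchers' actual logic program):

```python
# Hypothetical sketch of a double-effect decision rule, loosely inspired by
# the logic program described in 'The Robot's Dilemma' (Nature, 2015).
# All names and rules are illustrative assumptions, not the real system.

def permissible(action):
    """Rule an action permissible if any harm it causes is not the
    intended result of the action, but merely 'simply necessary' to it,
    and is outweighed by the good achieved."""
    if action["harm"] == 0:
        return True
    # The doctrine-of-double-effect test, flattened to two checks:
    return (not action["harm_is_intended"]) and action["good"] > action["harm"]

# Swerve to hit one person in order to avoid hitting two:
swerve = {"harm": 1, "good": 2, "harm_is_intended": False}
print(permissible(swerve))  # the 'ethical robot' waves it through
```

Which is rather the point: once written down, the 'ethics' turns out to be one boolean flag and one inequality.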
One of the computer scientists behind this logic program argues that human ethical choices are made in a similar way: ‘Logic’, he says, ‘is how we … come up with our ethical choices.’
But this can scarcely be true. For good or ill, ethical choices often fly in the face of logic. They may come from gut instinct, natural cussedness, a desire to show off, a vague inkling, a shudder, a sense of unease, or a sudden imaginative insight.
I am marching through North Carolina with the Union Army, utterly convinced that only military victory over the Confederacy will abolish the hateful institution of slavery. But I no sooner see the face of the enemy – a scrawny, shoeless seventeen-year-old farm boy – than I throw away my gun and run sobbing from the battlefield. This is an ethical decision resulting in decisive action, only it isn't made in cold blood, and it goes against the logic of my position.
Computer scientists writing the logic program for an ethical robot may appear as modern as modern can be, but their arguments come from the 1700s. The idea that ethics are logical appeals to what – in another context – Hilary Putnam describes as:
the comfortable eighteenth century assumption that all intelligent and well-informed people who mastered the art of thinking about human actions and problems impartially would feel the appropriate ‘sentiments’ of approval and disapproval in the same circumstances unless there was something wrong with their personal constitution.*
* Hilary Putnam, The Collapse of the Fact/Value Dichotomy and Other Essays, 2002.
The thinking may be strictly 1700s, but the technology isn't. The US Department of Defense is at work on tiny rotorcraft known as FLACs (Fast Lightweight Autonomous Crafts) that will be able to go inside flats and houses, office blocks and restaurants and deliver a one-gram explosive charge to puncture the cranium. These FLACs are a type of Lethal Autonomous Weapons System (LAWS). If drones weren't bad enough, LAWS are on a whole new level. With drones, a human always makes the decision whether to kill, from however far away. But LAWS are a break with tradition. They are fully autonomous.