The Tyranny of Numbers: Why Counting Can’t Make Us Happy - David Boyle - Page 21

I


But suppose you get everything you want, wondered John Stuart Mill at the start of his first nervous breakdown and his rejection of Bentham’s puritanical legacy: ‘Would this be a great joy and happiness to you?’ And the irrepressible self-consciousness distinctly answered, ‘No!’

Mill’s irrepressible self-consciousness definitely got it right. The human psyche is too complex and far too fleeting to be pinned down in quite that way. You can carry out Bentham’s calculations of happiness with incredible accuracy, you can measure what you want precisely, but somehow the psyche slips away and sets up shop somewhere else. Or as Gershwin put it: ‘After you get what you want, you don’t want it’. While Mill was locking himself into his bedroom, Samuel Taylor Coleridge was coming to similar conclusions: ‘But what happiness?’ he said to the Benthamites with a rhetorical flourish. ‘Your mode of happiness would make me miserable.’ Mill was having his collapse 30 years before Freud was even a flicker in his father’s eye, and the idea that human beings might secretly want something different from what they think they want was untested and unfamiliar. Yet Mill instinctively knew that measuring happiness was just too blunt an instrument.

Generations later we make the same kind of discoveries ourselves over and over again. But we tend to solve the problem by measuring ever more ephemeral aspects of life, constantly bumping up against the central paradox of the whole problem, which is that the most important things are just not measurable. The difficulty comes because they can almost be counted. And often we believe we have to try just so that we can get a handle on the problem. And so it is that politicians can’t measure poverty, so they measure the number of benefit claimants instead. Or they can’t measure intelligence, so they measure exam results. Doctors measure blood cells rather than health, and people all over the world measure money rather than love. They might sometimes imply almost the same thing, but often they have little to do with each other.

Anything can be counted, say the management consultants McKinsey & Co., and anything you can count you can manage. That’s the modern way. But the truth is, even scientific measurement has its difficulties. Chaos theory showed that very tiny fluctuations in complex systems have very big consequences. Or as the gurus of chaos theory put it: the flapping of a butterfly’s wings over China can affect the weather patterns in the UK. The same turned out to be true for other complex systems, from the behaviour of human populations to the behaviour of share prices, from epidemics to cotton prices.
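The butterfly effect can be made concrete in a few lines of code. The sketch below uses the logistic map in its chaotic regime, a standard textbook illustration rather than anything from the book: two starting values differing by one part in a billion soon produce completely different trajectories.

```python
# A minimal sketch of sensitive dependence on initial conditions, using the
# logistic map x -> r*x*(1-x) with r = 4 (its fully chaotic regime). The map
# and parameter choice are a standard illustration, not the book's own example.

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# Two starting points that differ by one part in a billion...
a = logistic_trajectory(0.200000000)
b = logistic_trajectory(0.200000001)

# ...drift apart until the trajectories are effectively unrelated.
divergence = abs(a[-1] - b[-1])
```

Doubling a tiny initial error at every step, the map wipes out nine decimal places of agreement within a few dozen iterations, which is exactly why long-range prediction of such systems fails.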

The man who, more than anybody else, undermined the old idea that measurements were facts was a Lithuanian Jew, born in Warsaw before the war, the son of a clothing wholesaler who found himself working for IBM’s research wing in the USA. Benoit Mandelbrot is probably the best known of all the pioneers of chaos because of the extraordinary patterns, known as fractals, that he introduced by running the rules of chaos through IBM’s computers. And he got there with a simple question that makes the kind of statistical facts the Victorians so enjoyed seem quite ridiculous. The question was: ‘How long is the coastline of Britain?’

On the face of it, this seems easy enough. You can find the answer in encyclopaedias. But then you think about it some more and you wonder whether to include the bays, or just to take a line from rock to rock. And having included the bays, what about the sub-bays inside each bay? And do you go all the way round each peninsula however small? And having decided all that, and realizing that no answer is going to be definitive, what about going round each pebble on the beach? In fact the smaller you go, to the atomic level and beyond, the more detail you could measure. The coastline of Britain is different each time you count it and different for everyone who tries.
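The coastline paradox can be sketched with a toy model. The code below uses the Koch curve as a stand-in coastline (my illustration, not Mandelbrot’s own calculation): each refinement replaces every segment with four segments a third as long, so a finer ruler always finds more length.

```python
import math

# Toy model of the coastline paradox: the Koch curve as a stand-in coastline.
# Each refinement replaces every straight segment with four segments one third
# as long, so measuring at a finer scale always finds MORE total length.

def koch_length(depth, base=1.0):
    """Measured length of a Koch curve after `depth` refinements."""
    return base * (4.0 / 3.0) ** depth

# The measured 'coastline' grows without bound as the ruler shrinks:
# 1.0, 1.333..., 1.777..., 2.370..., ...
lengths = [koch_length(d) for d in range(8)]

# The curve's fractal dimension, log 4 / log 3, is about 1.26 -- close to the
# roughly 1.25 Mandelbrot estimated for the west coast of Britain.
dimension = math.log(4) / math.log(3)
```

There is no “true” length to converge to: the answer depends entirely on the scale of measurement, which is Mandelbrot’s point against treating such figures as facts.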

There was a time when accountants were able to deal with this kind of uncountable world better than they are now. In the early days of the American accountancy profession, they were urged to avoid numbers. ‘Use figures as little as you can,’ said the grand old man of American accounting James Anyon, who came from Lancashire. ‘Remember your client doesn’t like or want them, he wants brains. Think and act upon facts, truths and principles and regard figures only as things to express these, and so proceeding you are likely to become a great accountant and a credit to one of the truest and finest professions in the land.’

Anyon had arrived in the USA in 1886, to look after the firm of Barrow, Wade, Guthrie and Co – set up three years before by an English accountant who realized there was a wide-open gap in the accountancy market in New York City. Unfortunately, the day he arrived, he was threatened with violence by the very large chief assistant, who had been secretly trying to take the enterprise over. The case ended up in the Supreme Court. Anyon survived the ordeal and 30 years later, he was giving advice to young people starting out in what was still a new profession. ‘The well trained and experienced accountant of today … is not a man of figures,’ he explained again.

But Anyon’s successors ignored his advice, and for a very familiar reason. The public, the politicians and their business clients wanted control. They wanted pseudo-scientific precision, and were deeply disturbed to discover that accountancy was not the exact science they thought it was. Every few years, there was the traditional revelation of a major fraud or gigantic crash, and a shocked public could not accept that accounts might ever be drawn up in different ways. How could two accountants come to different conclusions? How could some companies keep a very different secret set of accounts?

This issue was brought up by Pacioli himself, who said that even in his day some people kept two sets of books: one for customers and one for suppliers. In the First World War, Lloyd George once remarked that the War Office kept three sets of casualty figures, one to delude the cabinet, one to delude the public and one to delude itself. Anyone reading the public accounts of some companies will realize this practice has life in it yet. But as the centuries have passed, it has become more and more of an issue, and accountants have been on the front line of solving the resulting confusions. The Western world is now awash with consultants and accountants who will accept a large fee to come into your organization, measure the way you work, test your assumptions and your profits, or lack of them, measure the mood of your employees and customers, and tell the public.

The National Audit Office and the Audit Commission arrived in the world in the early 1980s and set to with a vengeance. The British Standards Institution organized a quality standard, then called BS5750, against which auditors could measure accountants’ achievements. Environmental quality standards followed, and a whole range is now available across the world – US, European and global standards, with auditors behind each one: measuring, measuring, measuring. By 1992, environmental consultancy alone was worth $200 billion. Counting things is a lucrative business. Which is one of the reasons the private sector auditing firms, like Arthur Andersen, PricewaterhouseCoopers and KPMG, entered such a boom period in the 1980s. By 1987, they were creaming off as many as one in ten university graduates. It is one of the paradoxes of the modern world that the failure of auditors is expected to be solved by employing more auditors. And the trouble with auditors of any kind (accountants or academics) is that they are applying numerical rules to very complex situations. They wear suits and ties and have been examined to within an inch of their lives about their understanding of the professional rules. But their knowledge of life outside the mental laboratory may not be very complete. Sometimes it’s extremely sketchy. And when Western consultants arrive in developing countries with their clipboards, like so many Accidental Tourists, it can do a great deal of damage.

Just how much damage can be done by faulty figures has been revealed in an extraordinary exposé by the development economist Robert Chambers. He described the number-crunchers as innocents abroad, deluded – sometimes deliberately – by local elders in distant villages. As a result of what may have been an elderly insect-damaged cob, consultants convinced themselves during the 1970s that African farmers were losing up to 40 per cent of their harvest every year. The real figure was around 10 per cent, yet by the early 1980s American aid planners were diverting up to $19 million a year into building vast grain silos across Africa to tackle a problem that didn’t exist.

Then there was the UN Food and Agriculture Organization’s notorious questionnaires in the early 1980s, which completely ignored mixed farms in developing countries. They asked only about the main crop; anything else was too complicated. As a result, production rates in developing countries seemed so low that multinationals believed they needed genetically-manipulated seeds to help cut famine. But then, as Emerson said, people only see what they want to see. That’s the trouble with questionnaires.

‘Professional methods and values set a trap,’ says Chambers in his book Whose Reality Counts?:

Status, promotion and power come less from direct contact with the confusing complexity of people, families, communities, livelihoods and farming systems, and more from isolation which permits safe and sophisticated analysis of statistics … The methods of modern science then serve to simplify and reframe reality in standard categories, applied from a distance … Those who manipulate these units are empowered and the subjects of analysis disempowered: counting promotes the counter and demotes the counted.

Auditors deal in universal norms, methods of counting, targets, standards – especially in disciplines like psychology and economics that try to improve their standing by measuring. This is how economics transformed itself into econometrics, psychology transformed itself into behavioural science, and both gained status – but all too often lost their grip on reality. Sociologists tackled their perceived lack of ‘scientific’ respectability by organizing bigger and bigger questionnaires to confirm what people knew in their heart of hearts anyway. Even anthropologists, who need a strong dose of interpretation provided by the wisdom, understanding and imagination of a researcher on the ground, began to lose themselves in matrices and figures. Scientists have to simplify in order to separate out the aspect of truth they want to study – and it’s the same with any other discipline that uses figures.

Often it’s only the figures that matter, even when everybody knows they are a little dodgy. One paper on this phenomenon by the economist Gerry Gill – called ‘OK the data’s lousy but it’s all we’ve got’ – took its title from an unnamed American economics professor explaining his findings at an academic conference. Which is fine, of course, unless the data is wrong – because people’s lives may depend on it. ‘Yet professionals, especially economists and consultants tight for time, have a strongly felt need for statistics,’ says Chambers. ‘At worst, they grub around and grab what numbers they can, feed them into their computers, and print out not just numbers but more and more elegant graphs, bar-charts, pie diagrams and three-dimensional wonders of the graphic myth with which to adorn their reports and justify their plans and proposals.’

Chambers found that there were twenty-two different erosion studies in one catchment area in Sri Lanka, but the figures on how much erosion was going on varied by as much as 8,000-fold. The lowest had been collected by a research institute wanting to show how safe their land management was. The highest came from a Third World development agency showing how much soil erosion was damaging the environment. The scary part is that all the figures were probably correct, but the one thing they failed to provide was objective information. For that you need interpretation, quality, imagination.

‘In power and influence, counting counts,’ he wrote. ‘Quantification brings credibility. But figures and tables can deceive, and numbers construct their own realities. What can be measured and manipulated statistically is then not only seen as real; it comes to be seen as the only or the whole reality.’ Then he ended up with a neat little verse that summed it all up:

Economists have come to feel

What can’t be measured, isn’t real.

The truth is always an amount

Count numbers, only numbers count.

But the distinctions really get blurred when politicians start using numbers. Waiting lists up 40,000, Labour’s £1000 tax bombshell, fertility down to 1.7, 22 Tory tax rises – elections are increasingly a clash between competing statistics. It’s the same all over the world. Figures have a kind of spurious objectivity, and politicians wield them like weapons, swinging them about their heads as they ride into battle. They want to show they have a grasp of the details, and there is something apparently hard-nosed about quoting figures. It sounds tough and unanswerable.

But most of the time, the figures also sound meaningless. The public don’t take them in, and they serve simply as a kind of aggressive decoration to the argument. But, as politicians and pressure groups know very well, a shocking figure can every so often grasp the public’s imagination. In the UK, the best-known election policy for the 1992 general election – repeated in the 1997 election – was the Liberal Democrat pledge to put 1p on income tax for education. It sounded clear and costed, but it was the perfect example of a number being used for symbolic effect: it implied real commitment and risk, though the 1p itself meant almost nothing. ‘If relying on numbers didn’t work,’ said Andrew Dilnot of the Institute for Fiscal Studies in a recent BBC programme, ‘then in the end a whole range of successful number-free politicians would appear.’

They haven’t appeared yet. The problem for politicians is that they have to use figures to raise public consciousness, but find that the public doesn’t trust them – and the resulting cacophony of figures tends to drown out the few that are important. The disputes of political debate have to be measurable, but they get hung up about measurements that only vaguely relate to the real world.

Take rising prices. You can’t see them or smell them, so you need some kind of index to give you a handle on what is a real phenomenon. You can’t hold them still while you get out your ruler, yet the ersatz inflation figures have assumed a tremendous political importance. We think inflation is an objective measure of rising prices, when actually it is a measurement based on a random basket of goods which has changed from generation to generation. In the 1940s, it included the current price of wireless sets, bicycles and custard powder. In the 1950s, rabbits and candles were dropped in favour of brown bread and washing machines. The 1970s added yoghurt and duvets, the 1980s added oven-ready meals and videotapes, and the 1990s microwave ovens and camcorders. It’s a fascinating measure of our changing society, but it isn’t an objective way of measuring rising prices over a long period of time.
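How much the basket matters can be shown with a small, entirely invented example. All the goods, quantities and prices below are made up for illustration (none of these figures come from the book); the point is only that the same set of price changes yields a different ‘inflation rate’ for each basket.

```python
# Illustrative sketch: an inflation figure depends on the basket, not just the
# prices. Goods, weights and prices here are invented, not the book's data.

def index_change(basket, old_prices, new_prices):
    """Percentage change in the cost of a weighted basket of goods."""
    old_cost = sum(qty * old_prices[good] for good, qty in basket.items())
    new_cost = sum(qty * new_prices[good] for good, qty in basket.items())
    return 100.0 * (new_cost - old_cost) / old_cost

old_prices = {"bread": 1.00, "candles": 0.50, "washing machine": 200.0}
new_prices = {"bread": 1.10, "candles": 0.55, "washing machine": 210.0}

# Two generations' baskets: candles dropped, washing machines added.
basket_before = {"bread": 10, "candles": 20}
basket_after = {"bread": 10, "washing machine": 0.05}

# The same price changes, two different 'inflation rates':
inflation_before = index_change(basket_before, old_prices, new_prices)  # 10.0%
inflation_after = index_change(basket_after, old_prices, new_prices)    # 7.5%
```

Neither figure is wrong; each correctly prices its own basket. But swapping candles for washing machines changes the headline number, which is exactly why an index with a shifting basket cannot measure rising prices objectively across generations.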

