
1

Technofatalism and the future: is a world without Foxconn even possible?


The routine assumption is that progress – particularly high-tech progress – depends on inequality, on a world of capitalist entrepreneurs, low-paid factory workers and toxic waste dumps. Yet every major development in the computer’s history arose from voluntary initiative or public funding rather than corporate research. The historical evidence suggests that innovation and creativity thrive in egalitarian settings and are stifled by competition. Far from deserving credit for the computer revolution, capitalism has driven it down a narrow and barren path, and might even have turned it into ‘a revolution that didn’t happen’.

In May 2010 world media picked up a report from the Hong Kong-based China Labour Bulletin that desperate workers were killing themselves by jumping from the windows of the vast Foxconn factory in Shenzhen, in China’s Guangdong province, where the Apple iPhone was being manufactured.1 Newspaper columnist Nick Cohen wondered what could be done to alleviate the situation, or even to stimulate some sense of outrage about it, but drew a blank:

A boycott of Foxconn’s products would not just mean boycotting Apple, but Nintendo, Nokia, Sony, HP and Dell too. Boycott China and you boycott the computer age, which, despite the crash, effectively means boycotting the 21st century, as we so far understand it.2

Cohen’s ‘as we so far understand it’ does at least hint at a recognition that ‘another world is possible’, but he did not pursue the idea. The phrase ‘a quick trip back to the Stone Age’ seems to lurk not far away.

It’s drummed into us that all good things come at a price. If we want nice things, someone must pay for them: you can’t have modern, high-tech luxuries and happy workers, clean rivers, lovely woodlands and country lanes thick with butterflies in summer.

It seems impossible to live what we think of as ‘a normal life’ in what we’ve learned to call ‘a modern country’ without being complicit in human immiseration or environmental destruction. Not even the poor can avoid complicity; in fact, they least of all – from the 19th-century factory-hands in their slave-grown cotton clothes, sustained by Indian tea sweetened with slave-grown sugar, to the 21st-century migrants whose very existence can depend on having a mobile phone. ‘Progress’ apparently requires inequality.

The range of a modern economy’s inequality is astonishing, and all of it is packed into its most popular products: as technology advances, so does the range of inequality drawn into its web. Today’s iconic electronic products, like yesterday’s cotton ones, embody the greatest range of human inequality currently possible. Must it be so? Most of us know at least some of the facts: the toxic waste mountains; the wholesale pollution of the environments where the copper, gold, tin and rare earths are extracted, in countries where life itself has never been so cheap; the sweated labor; and so on.

TWO PARADOXES ABOUT NEW TECHNOLOGY

Yet here’s the first of two key paradoxes: when you look at what actually happens when technological progress is made, you find very little to support the idea that progress demands inequality – and even some mainstream economists recognize this. World Bank economist Branko Milanovic, for example, concluded a large-scale study of inequality and economic growth in history like this:

The frequent claim that inequality promotes accumulation and growth does not get much support from history. On the contrary, great economic inequality has always been correlated with extreme concentration of political power, and that power has always been used to widen the income gaps through rent-seeking and rent-keeping, forces that demonstrably retard economic growth.3

This is especially and manifestly true when looking at the present system’s ‘jewel in the crown’: the computer. The thing we know (or think we know) as ‘the computer’ emerged in conspicuously egalitarian settings, and it wouldn’t go on functioning for very long if inequality ever succeeded in its quest to invade every nook and cranny of the industries that support it.

The computer in your hand may have arrived there via a shocking toboggan-ride down all the social gradients known to humanity, but inequality was conspicuously absent at its birth, largely absent during its development, and remains alien to computer culture – so alien that the modern economy has had to create large and expensive ‘egalitarian reservations’ where the essential work of keeping the show on the road can be done in a reasonably harmonious and effective manner. The New Yorker’s George Packer has described4 how the leading capitalist companies (Google, Microsoft, Apple and the like) have even built their own, luxurious, egalitarian ‘villages’ and ‘campuses’ where their programmers and other creative types are almost totally insulated from the extreme inequality around them, and can believe they have moved beyond capitalism into a new egalitarian age.

More than half of the world’s computers and smartphones, more and more of its electronic appliances, and nearly all of the internet, depend on software created by freely associating individuals, in conscious defiance of the management hierarchies and the profit-driven intellectual property (IP) regime that underpin giants like Apple. Richard Stallman, founder of the ‘Free Software’ movement, sees any attempt to take ownership of the process as an affront to humanity. Of intellectual property law, Stallman has said:

I consider that immoral… and I’m working to put an end to that way of life, because it’s a way of life nobody should be part of.5

Free-market hawks may sneer at such idealism but their world would simply not exist without people like Stallman. Even if it did, it would not work very well without Stallman’s brainchild, the computer operating system known as GNU/Linux, and the global network of unpaid collaborators who have developed and continue to develop it. Google itself is built on GNU/Linux, as are Facebook and other social-media sites, and even the computers of the New York Stock Exchange: GNU/Linux is faster and more robust than the commercial alternatives.

Stallman announced the GNU project in 1983 as an alternative to the older, established Unix operating system,6 with the difference that all of the code would be freely available to anyone who wanted it, and could be changed and improved by anyone capable of doing so (hence ‘free and open source’, or FOSS). Stallman’s only stipulation was that nobody could own the code, and any modifications must be shared. GNU became ‘GNU/Linux’ after 1991, when a young Finnish admirer of Stallman’s work, Linus Torvalds, started to circulate the code for the ‘kernel’ that allows GNU to run on different kinds of computers.7 This made it possible to use GNU/Linux (now generally known simply as Linux) on just about every kind of computing device that exists, including automobile engines, avionics, industrial appliances, power stations, traffic systems and household appliances.

The second paradox is that, while the new technologies are in principle supremely parsimonious, their environmental impact has turned out to be the exact opposite. Each wave of innovation needs fewer material inputs than its predecessor to do the same amount of work – yet in practice it consumes more resources. The Victorian economist William Stanley Jevons (see Chapter 5) was the first to draw attention to this paradox, which now bears his name – and it becomes even more striking when considering all the industries and activities that depend on or are mediated by electronics and computers. As economic activity has been computerized, it has become more centralized, and its overall environmental impact has increased – as have control by capital of labor and of people, the wealth-differences between rich and poor, and the physical distances between them.
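To see how efficiency per device and total consumption can pull in opposite directions, here is a minimal sketch with invented figures – the numbers are purely illustrative, not drawn from Jevons or from the electronics industry:

```python
# A toy illustration of the Jevons paradox (rebound effect).
# All figures are invented for illustration only.

resource_per_unit = [10.0, 5.0, 2.5]    # each generation needs less material per device
units_in_use      = [1e6, 5e6, 40e6]    # but the number of devices grows much faster

for generation, (per_unit, units) in enumerate(zip(resource_per_unit, units_in_use), 1):
    total = per_unit * units
    print(f"generation {generation}: {per_unit} per device x {units:,.0f} devices = {total:,.0f}")

# Per-device efficiency improves fourfold, yet total consumption grows tenfold.
```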

Is this mounting impact an inevitable ‘price of progress’, or is it the result of progress falling into the hands of people and a system that simply cannot deal with it responsibly?

WHAT IS TECHNOLOGY ANYWAY?

It is important to challenge two conventional assumptions that are often made about technology: first, that we have capitalism to thank for it; and second, that it follows a predetermined course, that the future is waiting to be revealed by clever minds and that progress ‘unfolds’ from Stephenson’s Rocket to the automobile, DVDs and the iPhone.

The economist Brian Arthur, who has made a lifetime study of technological change, argues that human technology is a true, evolutionary phenomenon in the sense that, like life, it exploits an ever-widening range of natural phenomena with ever-increasing efficiency: hydraulic, mechanical and electrical phenomena, and so on. He defines technology as:

a phenomenon captured and put to use. Or more usually, a set of phenomena captured and put to use… A technology is a programming of phenomena to our purposes.8

Technology develops through greater and greater understanding of the phenomena, and what they can be made to do, and how they can be coaxed into working together. Arthur uses the analogy of mining: easily accessed phenomena are exploited first (friction, levers) then ‘deeper’, less accessible ones (like chemical and electrical phenomena). As understanding of the phenomena deepens, their essential features are identified for more precise exploitation: the process is refined so that more can be done with less.

As the ‘mining of nature’ proceeds, what once seemed unrelated ventures unexpectedly break through into each other’s domains, and link up (as when magnetism and electricity were discovered to be aspects of the same phenomenon in the 19th century). No technology is primitive; all of it requires bodies of theory, skill and experience, and it tends inexorably towards greater and greater economy of material means. He describes how phenomena – for example, friction being used to make fire – are exploited with increasing efficiency as they are worked with, played with and understood.

The parallels with biology are striking. Technology is just like a biological process – and there is a tendency at this point (which Arthur goes along with somewhat) to start thinking of technology as ‘a new thing under the sun’ with a life of its own, and rhapsodizing about ‘where it is taking us’.

If you only look at the technologies themselves, in isolation, the parallels are there, including the tendency to see computer code as space-age DNA, and to sit back and be awed as some brave new world unfolds. But what really distinguishes human technology from biological evolution, surely, is that it all happens under conscious, human control – which implies some important differences.

Technologies, unlike living organisms, can inherit acquired traits, and features of unrelated technologies can, as it were, ‘jump species’, as when turbine technology migrated from power stations into jet engines, and punched-card technology for storing information spread from the textile industry (the Jacquard loom) into the music industry (the pianola) and then to computing. The eclectic human agency responsible for this cross-fertilization is well demonstrated by the Victorian computer pioneer Charles Babbage, who was continually investigating the arcane processes developed in different industries – and made the connection with the Jacquard loom at an exhibition in 1842, as did his future collaborator, Ada Lovelace.9

This is even more the case where electronics and computers are concerned – a point that Brian Arthur makes: ‘Digitization allows functionalities to be combined even if they come from different domains, because once they enter the digital domain they become objects of the same type – data strings – that can therefore be acted upon in the same way’.10 Digitization is, moreover, just one of the possible techniques for doing this, as will be explained later. The underlying and really powerful principle is that phenomena from utterly different domains of experience may share a deeper, abstract reality that can now be worked with as if it were a physical thing in itself.
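A minimal sketch of the point, with invented data: once a sound, an image and a piece of text have been encoded as numbers, they are all ‘data strings’ of the same type, and one generic operation can be applied to any of them.

```python
# Three different 'domains of experience', all reduced to sequences of numbers.
audio_samples = [0, 42, 97, 42, 0]        # a sound, as amplitude samples
image_pixels  = [255, 128, 0, 128, 255]   # a row of an image, as brightness values
text_bytes    = list("hello".encode())    # a word, as byte values

def run_lengths(data):
    """One generic operation (counting runs of repeated values) applied to any data string."""
    runs, previous, count = [], None, 0
    for value in data:
        if value == previous:
            count += 1
        else:
            if previous is not None:
                runs.append((previous, count))
            previous, count = value, 1
    runs.append((previous, count))
    return runs

# The same function works on all three, because digitization has made them objects of the same type.
for stream in (audio_samples, image_pixels, text_bytes):
    print(run_lengths(stream))
```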

Most importantly of all, technological evolution need never have dead ends – and this is where we come slap-bang up against the contradiction that is today’s technological environment, in which promising technologies can be ditched, apparently never to return, within months of their first appearance.

TECHNOLOGY SHOULD HAVE NO DEAD ENDS

In principle – and in practice for most of the millennia that our technological species has existed – ideas that have ‘had their day’ are not dead and buried for ever. Human culture normally sees to that. Technological improvements can and should be permanent gains – inventions should stay invented. They may lurk in human culture for decades or even centuries, and be resurrected to become the bases of yet more discoveries, so that technology becomes richer, more complex and more efficient.

In the past, discoveries have tended overwhelmingly to become general property, rapidly, via exactly the same irrepressible social process whereby songs and jokes become general property. The genie does not always go back into the bottle and can turn up anywhere – precipitating further discoveries, always making more and yet more efficient use of natural phenomena, and revealing more about those phenomena, which yet more technologies can then use.

Biological evolution proceeds blindly, as it must, over vast epochs via small changes and sudden catastrophes. It contains prodigious numbers of dead ends: species that die out for ever, taking all their hard-won adaptations with them. Living species cannot borrow from each other: mammals could not adopt the excellent eyes developed (in the octopus) by molluscs; we had to develop our own eyes from scratch; so did the insects. Human technologies can and do borrow freely from each other, and in principle have no dead ends.

Unlike biological species, an abandoned technology can lie dormant for centuries and be resuscitated rapidly when conditions are right. With living things, there is no going back; the fossilized remains of extinct species, like ichthyosaurs and pterodactyls, can’t be resuscitated when the climate is favorable again. Darwinian evolution must plough forward, the only direction available to it, and create completely new creatures (dolphins, birds) based on the currently available stock of life forms (mammals, reptiles). But with technology we can always go back if we want to. For once, the arrow of time is under our control. Or should be.

Comparing Darwinian and technological evolution reveals an anomaly in the kind of innovation we see around us in the present computer age: here, technologies apparently can effectively disappear from the common pool, the way dinosaurs and other extinct species have done. Fairly large technologies can disappear abruptly, as soon as a feeling spreads among those who control their manufacture that the market for them might soon disappear, or even might become less attractive.

Or a technology may deliberately be kept out of the common pool, by someone who patents it in order to suppress it. Yesterday’s ideas may survive in documents, and for a while in human knowledge and skill, but they soon become very difficult to revive. ‘The show moves on.’ Premises and equipment are sold, staff are laid off and all the knowledge they had is dispersed; investors pull out and put their cash elsewhere; and products that once used the technology either die with it, or are laboriously redesigned to use alternatives. These extinctions help to create the determinist illusion that technology follows a single ‘best’ path into the future but, when you look at what caused these extinctions, fitness for purpose seldom has much to do with it.

No technology ought ever to die out in the way living organisms have done. It seems perverse to find Darwinian discipline not merely reasserted in a brand-new domain that should in principle be free of it, but in a turbo-charged form, unmitigated by the generous time-scales of Darwinian evolution. This market-Darwinism comes at us full pelt within ultra-compressed, brief, human time-frames. Where there should be endless choice, there is instead a march of progress that seems to have the same deterministic power as an avalanche.

But this is a fake avalanche. Every particle of it is guided by human decisions to go or not to go with the flow. These are avalanches that can be ‘talked back up hill’ – in theory and sometimes even in practice. Even in the absence of such an apparent miracle, deviation always remains an option, and is exercised constantly by the builders of technology. Indeed the market would have very little technological progress to play with if technologists did not continually evade its discipline, cross boundaries, and revisit technologies long ago pronounced dead. This becomes more and more self-evident, the more our technologies advance.

ARE SOCIETIES TECHNOLOGIES?

Brian Arthur begins to speculate on the possible range of things that might be called ‘technology’. He observes that science and technology are normally paired together, with science generally assumed to be technology’s precursor, or its respectable older brother. Yet he points out that human technology evolved to a very high level for centuries and even millennia before science, as we now understand it, existed. And then he asks: is modern science a technology? It is a technique that, once discovered, has evolved in much the same way as specific technologies have done.

Taking this argument further, human nature is part of nature; we have various ways of exploiting it to particular purposes and, as we learn more about how people function, those ways become more and more refined.

Exploiters of humanity are avid students of human nature: they are eagle-eyed at spotting ways of coercing people to do things they do not wish to do and quick to adopt the latest research for purposes of persuasion. They know that human nature is what we make it. They make it fearful and obedient. We, however, know that human nature can be better than this. We know that human nature can take almost any form – but we also know, roughly at least, what kind of human nature we want. Should we not devise societies that will help us to be the kinds of people we aspire to be?

A key part of any Utopian project should be to discuss widely and think deeply about the human natures we want to have and the ones we do not want to have, and to devise the kinds of social arrangements that will support and reward those characteristics.

HUMANITY BEGAN WITH TECHNOLOGY

Economic policy is driven by an assumption that technology is something hard, shiny and baffling that emerged in the cut-and-thrust of late 18th-century northern Europe, and has since spread throughout the world from there, bringing a mix of great benefits and serious challenges that we take to be an inevitable concomitant of progress. It’s further assumed that the vehicle for this revolution was the capitalist company.

Taking Brian Arthur’s definition of technology as ‘a phenomenon captured and put to use’, it’s pretty clear that technology is a lot bigger than that, and a lot older than that. It’s now becoming apparent that the people of so-called ‘primitive societies’ were and are great and pioneering technologists – and none of today’s technologies would be conceivable without what they achieved (so the ‘giants’ whose assistance the great Isaac Newton modestly acknowledged were themselves ‘standing on the shoulders of giants’: the Human Pyramid itself).

Richard Rudgley, an anthropologist, has described the scale of these discoveries in a book published in 1998, Lost Civilisations of the Stone Age.11 Long before the first cities appeared, leaving their large and durable remains for the first archeologists to ponder over, humans in all parts of the world were developing highly efficient tools and techniques for making tools, had elaborate cuisines, and were great explorers and expert navigators, artists and students of the natural world, including the sky. They even practiced surgery. We know this because evidence of the challenging form of cranial surgery known as trepanning (used to relieve pressure on the brain caused by blood clots) has been found in prehistoric remains from all over the world; it is one of the few forms of surgery that leaves unambiguous skeletal evidence. It is reasonable to assume from this that they also knew many other kinds of surgery.

Martin Jones, a pioneer of the new techniques of molecular archeology, makes the point that humans are not even viable without at least minimal technology, such as fire. In his book Feast: Why Humans Share Food, Jones says that ‘human evolution may have something to do with reducing the costs of digestion’.12 Humans have relatively small teeth and jaws, and our guts are not long enough to cope well with a diet composed entirely of uncooked food. Cooking also neutralizes the toxins in many otherwise inedible plants, increasing the range of foods humans can use. All of this requires highly co-operative sociality – which is in turn facilitated by the large, anthropoid brain that became possible through reduced ‘metabolic expenditure’ on jaws and guts: a self-reinforcing feedback cycle that, at a certain point, produced the intensely sociable, essentially technological, highly successful human species. Humans, their technology and their distinctive social order all seem to appear simultaneously in the archeological record 100,000 or more years ago.

TECHNOLOGY EMERGES FROM EGALITARIAN KNOWLEDGE ECONOMIES

Throughout nearly all of their first 100,000 or so years, the dominant characteristic of human communities has been egalitarianism, and we can work out a lot about how these egalitarian societies functioned not only from the physical evidence they have left, but also from modern people who live radically egalitarian lives: today’s hunter-gatherer and foraging peoples. Many of these communities have brought the art of egalitarian living to a level of impressive perfection, and have independently developed many of the same social mechanisms for maintaining equality – particularly significant because they are so widely separated from each other, on the furthest and least-accessible margins of all the inhabited continents in the world. One of these characteristics, which almost everyone who meets them comments upon, is an unshakeable commitment to sharing knowledge. To borrow a useful phrase, they are the ultimate ‘knowledge economies’.

But there is much more to this than ‘sitting around all day talking’, which is what so many Europeans see when they come across indigenous communities. There is an extraordinary commitment to accuracy and truth. Hugh Brody – an anthropologist who has worked on land-rights campaigns with hunter-gatherer communities, and made documentaries with them – has reflected on this at some length in his book The Other Side of Eden.13 George Dyson, whose work on computer history will be mentioned later, has also written about the extraordinary technological traditions this kind of knowledge economy can support, in his book about the Aleuts and their kayaks, Baidarka.14 Aleut kayaks are made in some of the most resource-poor places on earth, and are technological miracles that defy long-accepted wisdom by travelling at speeds once considered theoretically impossible for a human-powered craft.

The hunter-gatherer knowledge economy also supports a healthier kind of person. Physically, hunter-gatherers have always been healthier and often taller than their civilized counterparts (see Chapter 3). Explorers and anthropologists constantly remark on their happiness and ‘robust mental health’. Brody attributes this to a complete absence of anxiety about being believed, or listened to, or being completely honest, or whether the other person is telling the truth. This has a utilitarian dimension – such societies simply cannot afford deceit and lives depend on absolutely accurate information – but it runs deep: this is how we evolved. Evolution made us radically honest people, and going against this hurts.

Wherever it is found, the egalitarian ethos is maintained through what another anthropologist, Christopher Boehm, identified as ‘counter-dominance’ strategies.15 We can readily recognize these at work everywhere in modern communities in the extensive repertoire of strategies for ‘taking someone down a peg or two’, ranging from friendly ribbing, to gossip, to ostracism and, in the extreme, to homicide. There is also the array of self-effacement strategies used by those who do not want to seem domineering: ‘honestly, it was nothing’; ‘I’m completely hopeless with computers’, etc. Even within the most hierarchical and unequal modern societies, personal life is lived as much as possible within egalitarian or would-be egalitarian social bubbles (families, peer groups, work-groups, neighbors and, in wartime and warlike situations, nations).

In fact, we seem to need these even more as societies become harsher and more stratified, and it is now gradually becoming recognized that the evils that arise from inequality are largely the effects of group inequality – ‘us’ against ‘them’.16 We gravitate towards groups where we can have this experience of solidarity and, what is more, we do it without being aware that we are doing so. This is why evil is so banal; why ordinary people who see themselves as decent folk (and are, in most situations) are capable of genocide.

Solidarity is a fundamental phenomenon of human nature – and dominant forces have learned down the centuries to exploit it. If technology is ‘a phenomenon captured and put to use’ then all our formal and informal social systems are some kind of technology, and ‘social engineering’ is what they do. We need social systems that maximize our chances of ‘not doing evil’, to borrow Google’s motto – which is precisely what Google’s practice of segregating its creative elite in pretend-Utopias, separate from the society around them, can’t possibly do.17

Theologian-turned-neuroscientist Heidi Ravven has documented the fairly new but already impressively large body of research into this phenomenon, and the vast and terrible historical evidence of its workings and effects, in her book The Self Beyond Itself. She concludes:

On the societal scale, our freedom lies in developing institutions and cultural beliefs and practices and families that shape our brains toward the greatest good rather than toward narrow interests, and toward health rather than addictive habits and other limitations, starting early in life.18

THE MYTH OF CREATIVE COMPETITION

In the Northern world, there has been a dominant idea that human nature is fundamentally competitive and individualistic. Innovation is said to be driven by the lure of wealth; hence, if we want nice things like iPhones, we need an unequal society, where there is a chance to get ahead. But when we actually see innovation in action, that is not how it works.

Some of the clearest refutations of the ‘spur of competition and profit’ argument come from the world of computers, with its egalitarian, collaborative origins and continuing culture. This has even inspired a wave of wishful thinking, to the effect that computers herald a new, egalitarian age. The social-science writer David Berreby has described computer programmers as ‘The hunter-gatherers of the knowledge economy’19 and identifies a long list of similarities between the new knowledge-workers’ behavior and value systems, and those of the hunter-gatherers described by anthropologists such as Christopher Boehm and Marshall Sahlins. ‘Can we win the joys of the hunter-gatherer life for everyone?’ he asks, ‘Or will we replicate the social arrangements of ancient Athens or medieval Europe, where freedom for some was supported by the worst kind of unfreedom for others?’

Technology’s history makes more sense if we recognize it as a constant, global, human activity, unconcerned with corporate or national boundaries, or the status systems within them. But as technologies became more powerful, elites became increasingly aware of them as threats or opportunities, and either suppressed them, or appropriated them and tried to channel their development in directions they found acceptable.

This fits better with innovators’ own experience. One hardly ever hears of an important innovation emerging from a boardroom or a chief executive’s office. Usually, the innovation emerges from an organization’s nether regions, or from outside any recognized organization. The innovator must laboriously build up evidence, gather allies, pay court to financiers and backers, and only then, on a good day with a following wind, perhaps attract the boardroom’s attention. Then, perhaps, the organization will adopt the innovation and perhaps, after modifications and compromises of various kinds, sell it to the world as yet another great product from Apple, Canon, or whoever.

More often than not the innovation is used, but without much appreciation. When the first, small capitalist states arose in 16th-century Europe, major innovations had quietly been emerging from within European towns, or making their way into Europe in informal ways, from China and India, for several centuries. The merchant elite did not acknowledge them officially until 1474, when the state of Venice started granting its first 10-year patents. To those who only look at the official record, this has suggested the start of a period of innovation, but 1474 more likely marked the beginning of the end of Europe’s great period of innovation – mostly achieved by anonymous, federated craftworkers. In a major study of medieval industries published in 1991, Steven Epstein wrote:

More than five centuries of increasingly effective patents and copyrights have obscured the medieval craft world in which such rights did not exist, where, to the contrary, people were obliged to open up their shops to guild inspection and where theft of technology was part of the ordinary practice of business.20

This allowed a capitalist myth to flourish: that there was no progress at all in either technology or science in Europe from the end of the Roman Empire until the Renaissance. Lynn Townsend White, who became fascinated by this ‘non-subject’ in the early 1930s, wrote in 1978: ‘As an undergraduate 50 years ago, I learned two firm facts about medieval science: (1) there wasn’t any, and (2) Roger Bacon was persecuted by the church for working at it.’21

But between the 10th and 15th centuries, the stirrup, clockwork, glassmaking, the windmill, the compass, gunpowder, ocean-going ships, papermaking, printing and a myriad other powerful technologies were introduced or invented and developed under the noses of European elites, and were adopted and used by them greedily, ruthlessly and generally without comprehension. Many modern technologists and technology workers would say that little has changed.

Despite the contradictions, modern society is permeated by a belief that capitalism is pre-eminent when it comes to creating new technologies, and that computers and electronics have proved this beyond doubt. Even people on the Left say so. The sometime-socialist economist Nigel Harris has written of ‘the great technical triumphs of capitalism – from the steam engine and electricity to the worldwide web, air travel and astronauts’.22 He laments the environmental damage that seems to come with them, but he concedes that ‘markets and competing capital have a spectacular ability to increase output and generate innovations’.

An eminent Marxist, the geographer David Harvey, says: ‘The performance of capitalism over the last 200 years has been nothing short of astonishingly creative.’23 A moderately left-of-center commentator, Jonathan Freedland, argues that, even though capitalism has led to the climate crisis,

we would be fools to banish global business from the great climate battle… Perhaps capitalism’s greatest contribution will come from the thing it does best: innovation.24

The idea is even, apparently, central to the theories of Karl Marx and Frederick Engels. Their Communist Manifesto of 1848 contains what a highly respected Marxist scholar, Michael Burawoy, calls ‘a panegyric to capitalism’s power to accumulate productive forces’. The Manifesto says:

Subjection of nature’s forces to man, machinery, application of chemistry to industry and agriculture, steam navigation, railways, electric telegraphs, clearing of whole continents for cultivation, canalization of rivers, whole populations conjured out of the ground – what earlier century had even a presentiment that such productive forces slumbered in the lap of social labor?

But are Marx and Engels telling us that capitalism is a Good Thing? Of course not. They hated capitalism and expressed their hatred for it with vigor, relish and creativity. Marx continually alluded to its vampiric qualities (inspiring Mark Neocleous to call capitalism ‘the political economy of the dead’25). Marx often depicts capitalists as almost comical victims of circumstances. Capitalism, for Marx, is something like a natural phenomenon that hubristic entrepreneurs unleash but can barely control, still less understand. Marxism’s own parallel success story since 1848 surely stems to some extent from the way its explanation of grandiose capitalist behavior has rung so true, capturing the experience of so many millions of workers in so many different working situations.

But whatever Marxists think, conventional wisdom nowadays has it that capitalists are very wise, and that market competition between firms spurs innovation.

WHAT CAPITALISM CANNOT DO

A reputation for innovation started to become a valuable corporate asset around the time of the Second World War, and it has become almost an article of faith since then that modern, profit-driven capitalist firms, with their teams of highly motivated researchers, are the supreme exponents of technological innovation. Nonetheless, governments have occasionally felt the need to find out whether this really is the case or not.

A 1965 US Senate committee invited a succession of the leading authorities from all areas of industry to give them the benefit of their research into innovation, in an effort to decide whether the government should channel more of its research funding to large firms rather than small ones, and encourage business to concentrate into larger units, to foster a greater rate of innovation.26

The economist John Kenneth Galbraith, by no means an uncritical supporter of unfettered capitalism, had written not long before that ‘A benign providence… has made the modern industry of a few large firms an almost perfect instrument for inducing technical change’. Other eminent experts, such as the education theorist Donald Schön, disagreed, citing a major study called The Sources of Invention by a British research group headed by the Oxford economist, John Jewkes.27 This had seriously challenged the credibility of the corporate approach to major scientific problems, with its emphasis on teamwork and targets – an approach equally prevalent in the USSR and in the capitalist countries. Jewkes examined industries such as radar, television, the jet engine, antibiotics, human-made fibers, steel production, petroleum, silicones and detergents. The USSR came out badly from Jewkes’s study (no important innovations in any of the areas examined) but then, so did capitalist firms. In every area studied, innovation had dried up from the moment capitalist firms took a serious interest in it.

The Senate committee asked one of Jewkes’ co-authors, David Sawers, for an update on his study of the US and European aircraft industries. Sawers had found lots of growth, but not much serious innovation. US aviation had not come up with anything very new since the Second World War and was still living off a few, mainly German, inventions that had been made in wartime. Almost none of the major advances in aircraft design, anywhere in the world, had come from private firms, and firms had been particularly resistant to jet propulsion. Jet airliners, he said, had only become established thanks to the US government underwriting the development costs and guaranteeing a military market for the Boeing 707. Major advances such as streamlining, swept-back wings, delta wings, and variable geometry all came from outside capitalist firms and had struggled to gain acceptance from them – unless underwritten by military contracts. The only significant pre-War improvements made by capitalist firms that he had been able to find were the split flap (invented by Orville Wright, an old-school inventor, so not exactly representative) and the slotted flap (introduced by Handley Page in the UK). After the War, some modest innovation had been done by European aircraft firms on delta wings – the least adventurous of the new geometries (this work eventually led to the Concorde supersonic airliner, which was built largely at public expense, as a prestige project).

In the steel and automobile industries it was the same story: in general, no innovation except with lots of government support, or via the dogged persistence of independent inventors. In the photographic industry, Kodachrome (the first mass-market color film, launched in 1935) was famously invented at the kitchen sink by two musicians, Leopold Godowsky Jr and Leopold Mannes, in their spare time. The two men had struggled at the project, largely at their own risk, since 1917.

Sawers’ colleague, Richard Stillerman, put it thus:

Making profits is the primary goal of every firm. Few, if any, firms would support the kind of speculative research in manned flight undertaken by the Wright Brothers at the turn of the century after experts proclaimed that powered flight was impossible. Or the risky experiments on helicopters which a horde of optimistic individuals carried forward over several decades. Or the early rocket research pursued by individuals with limited financial backing.28

Turning to electronics, the committee learned that one of the industry’s greatest success stories, xerography (the technology behind the huge Xerox Corporation), had only seen the light of day after its inventor, Chester Carlson, approached the non-profit Battelle Memorial Institute for support. Other major innovations had been actively resisted by the firms in which they were being developed. Arthur K Watson (son of Thomas J Watson, founder of the IBM Corporation) was quoted to the effect that:

The disk memory unit, the heart of today’s random access computer… was developed in one of our laboratories as a bootleg project – over the stern warnings from management that the project had to be dropped because of budget difficulties. A handful of men ignored the warning … They risked their jobs to work on a project they believed in.29

The committee learned that talented researchers were fleeing capitalist firms to set up or join small, independently funded outfits where they could develop their ideas without interference. The ‘small startup’ subsequently became one of the iconic conventions of the electronics/computer industry and was touted as a great success, but small startups didn’t, don’t and can’t carry out the sustained research effort publicly funded teams are capable of. While some get rich, the vast majority do not.30

INNOVATION IN THE ‘NEW ECONOMY’

In the early 1980s the British government faced concerns about the country’s ability to compete in what was tentatively being called ‘the new economy’. Not all experts shared the government’s deep conviction that more intense commercial competition would deliver the requisite innovations, so it commissioned two US academics, Nathan Rosenberg and David Mowery, to settle the matter.

Mowery examined nine major pieces of research into industrial innovations that claimed to have been inspired by market demand. On close inspection, he found that, while it was true that they had arisen within firms, most of them had been the fruit of researchers following their own interests, and ‘the most radical or fundamental ones were those least responsive to “needs”’.31 He also explained that most of the key innovations in computing and electronics had happened well outside the reach of the market: in universities and government research organizations. He concluded that ‘while one may rely upon the ordinary forces of the marketplace to bring about a rapid diffusion of an existing innovation with good profit prospects, one can hardly rely completely upon such forces for the initial generation of such innovations’.

Another big study, published in 2004 by Daniel Cohen and colleagues for the Rodolfo Debenedetti Foundation,32 also found ‘scant’ evidence of any link between competition and innovation,33 although any innovations that were adopted did seem to be diffused more rapidly by competition – the same point that Mowery had made 20 years earlier.

The study also found that the innovations carried out by individual firms tended to consist of adding ‘tweaks’ of their own, preferably patentable ones, designed to secure a positional edge in the market and perhaps monopolize some particular aspect of it. The story of the World Wide Web (now known simply as ‘the Web’) is a classic demonstration of how difficult it then becomes to preserve the innovator’s original vision, and the surprisingly large environmental cost of departing from it – this is described in detail in Chapter 10.

WHY CAPITALISM INHIBITS INNOVATION

The 19th-century artist, writer and socialist William Morris described capitalist competition as ‘a mad bull chasing you over your own garden’34 – a view now supported by a wealth of research. Governments and firms started to take an interest in creativity as the pace of technological change accelerated after the Second World War. Creativity became a major area of study, one which has consistently found that even mildly competitive environments and situations of unequal power are completely incompatible with creative thinking.35

In one of many experiments described in a popular book about creativity by Guy Claxton, two groups of non-golfers were taught putting.36 Both were instructed in exactly the same way, but one group was told that they’d be inspected at the end of the course by a famous professional. This group’s performance (measured in balls successfully ‘sunk’) was far lower than the other group’s. The small anxiety of knowing a professional would be watching them devastated their ability to learn.

Firms became very keen on using the new research to teach their staff to be more creative, and some psychologists found creative ways to teach creativity, without challenging the creativity-stifling structures of the firms that paid them. Brainstorming, team away-days and the creative enclaves mentioned earlier are all products of this ‘creativity movement’.

Claxton has an illuminating anecdote about George Prince, co-founder of a popular ‘creativity-enhancing’ system called Synectics. To Prince’s chagrin, his own research led him to realize that the business context made his enterprise hopeless:

Speculation, the process of expressing and exploring tentative ideas in public, made people, especially in the work setting, intensely vulnerable, and that… people came to experience their workplace meetings as unsafe.

People’s willingness to engage in delicate explorations on the edge of their thinking could be easily suppressed by an atmosphere of even minimal competition and judgement. ‘Seemingly acceptable actions such as close questioning of the offerer of an idea, or ignoring the idea … tend to reduce not only his speculation but that of others in the group.’ 37

Even positive motivation stupefies – in particular, financial reward. In 1984 James Moran and his colleagues not only showed that reward impaired performance, they also discovered how it impaired performance: essentially by causing ‘a primitivization of psychological functioning’. The subjects regressed in effect to childhood, and performed below their mental age.38

After the 2007/8 financial crisis, the Nobel economics laureate and psychologist Daniel Kahneman wrote a global bestseller, Thinking, Fast and Slow, which explained exactly why performance bonuses do not and cannot work – and why the decisions of even the brightest corporate leaders are generally governed more by luck and delusion than by genius.39

Hierarchy invokes a different mindset from the one we enjoy among equals, and in which we are most productive and creative. In his 2009 book, The Master and His Emissary: the Divided Brain and the Making of the Western World, Iain McGilchrist has shown how this ties in with the human brain’s well-known (but subtle) ‘lateralization’. The brain’s two halves have somewhat different functions but the most important difference is one of style. In simple terms: ‘The right hemisphere has an affinity for whatever is living, but the left hemisphere has an equal affinity for what is mechanical’.40 The two halves need to work in tandem but the peculiar dynamics of Western society create over-reliance on the left hemisphere, with its more intense, focused approach, resulting in:

a pathological inability to respond flexibly to changing situations. For example, having found an approach that works for one problem, subjects [who have suffered damage to the right side of the brain] seem to get stuck, and will inappropriately apply it to a second problem that requires a different approach – or even, having answered one question right, will give the same answer to the next and the next.41

In his concluding chapter, McGilchrist imagines what the world would look like to a brain consisting of nothing but its left hemisphere, and it looks a lot like the world where so many of us have to live and work: it recognizes skill only in terms of what can be codified, is obsessed with details at the expense of the broader picture, and has little empathy. In this not-entirely-hypothetical world:

Fewer people would find themselves doing work involving contact with anything in the real, ‘lived’ world. Technology would flourish, as an expression of the left hemisphere’s desire to manipulate and control the world for its own pleasure, but it would be accompanied by a vast expansion of bureaucracy, systems of abstraction and control. The essential elements of bureaucracy, as described by Peter Berger and his colleagues [in the book The Homeless Mind, 1974], show that they would thrive in a world dominated by the left hemisphere.42

Capitalist industry is aware something is amiss, but cannot contemplate the obvious solution (stop being capitalist) and it is not in the creativity consultant’s interest to point it out either.

CAPITALISM DIDN’T MAKE COMPUTERS…

The notion that computers demonstrate capitalism’s creativity is finally belied by the computer’s history. Capitalism ignored computers for more than a century, and we might still be waiting for them now, had it not been for brief moments of egalitarian collaboration, grudgingly or accidentally tolerated by the elites, during the worst crises of the Second World War. And even then, capitalism had to be laboriously spoon-fed the idea for a further decade and more, before it would invest its own money in it. Mariana Mazzucato’s book The Entrepreneurial State reveals the enormous scale of capitalist industry’s dependence on publicly funded research. One of her case studies is Apple’s iPhone, which would amount to nothing very much without the billions of dollars’ worth of research effort that produced everything from its fundamentals (the microprocessors, computer science itself) to its most modern-looking features: the touch screen, the Global Positioning System (GPS), its voice-activated Siri ‘digital assistant’, and the internet itself – all the ‘features that make the iPhone a smartphone rather than a stupid phone’.43

Programmable, digital computers in the modern sense (but using mechanical gears rather than electronic valves) had been possible since the 1830s, when the British mathematicians Charles Babbage and Ada Lovelace were developing the science behind them. Babbage even developed much of the necessary hardware which, when partially completed in the late 20th century (to tolerances achievable with 1830s tools and methods), worked perfectly well. By most logical considerations, there was abundant need for Babbage’s machines in Britain’s burgeoning commercial empire – but human labor was dirt cheap and British commerce simply wasn’t interested in them.44

Computer scientist and historian Brian Randell, founder of the IEEE’s Annals of the History of Computing, found other perfectly viable ideas for computers that should have been snapped up by a capitalist system that actually did what it claimed to do, namely foster innovation and drive technological progress. One of his discoveries was Dublin accountant Percy Ludgate’s 1907 proposal for a portable computer that could have multiplied two 20-digit numbers in less than ten seconds, using electric motor power. Its specification included the ability to set up sub-routines. Ludgate’s work attracted great interest in the learned societies in Dublin, London and internationally, and the British Army eventually hired him to help plan logistics during the First World War, but he drew a complete blank with the businesses that would have benefited most from his invention.45

The Smithsonian Institution’s computer historian, Henry Tropp, has written:

We had the technical capability to build relay, electromechanical, and even electronic calculating devices long before they came into being. I think one can conjecture when looking through Babbage’s papers, or even at the Jacquard loom, that we had the technical ability to do calculations with some motive power like steam. The realization of this capability was not dependent on technology as much as it was on the existing pressures (or lack of them), and an environment in which these needs could be sympathetically brought to some level of realization.46

It took the exceptional circumstances of a second global war, topped by the threat of a nuclear one, to nudge governments and managements in the advanced nations into providing, or at least briefly tolerating, the kinds of environments where ‘these needs could be sympathetically brought to some level of realization’: highly informal settings where machines like the Colossus were built (by Post Office engineers for the British code-breaking center at Bletchley Park, to break German high-command codes – but scrapped and erased from the official record immediately afterwards47), and the ‘Electronic Numerical Integrator And Computer’ (ENIAC), designed in Philadelphia for calculating gunnery tables, and completed in 1946.

Colossus was the world’s first true, programmable, digital electronic computer, and it owed its existence to suspension of ‘business as usual’ by the threat of military defeat, which also briefly overshadowed the normal regime of homophobia and snobbery. Bletchley Park’s codebreaking genius, the mathematician Alan Turing, was tolerated while hostilities lasted despite his homosexuality and awkwardness. Colossus was built by a team of five working-class General Post Office (GPO) engineers who would never have been allowed near such an important project in normal times (they very nearly weren’t anyway, and certainly weren’t as soon as the War ended).

The GPO team was led by TH (Tommy) Flowers, a bricklayer’s son from the east end of London. Still in his teens, Flowers had earned an electrical engineering degree by night while serving a tough engineering apprenticeship during the day. By 1942 he was one of the few people in the world with a practical and theoretical knowledge of electronics, and an imaginative grasp of its possibilities. Management referred to Flowers as ‘the clever cockney’, and tried to get him off the project, but he and Turing got on well from the first, and Turing made sure the project went ahead despite the opposition and sneers. One of Flowers’ team, SW Broadhurst (a radar expert who had originally joined the GPO as a laborer), took it all without complaint. Computer scientist Brian Randell recorded this impression from him in 1975:

The basic picture – a few mathematicians of high repute in their own field accidentally encounter a group of telephone engineers, of all people… and they found the one really enthusiastic expert in the form of Flowers, who had a good team with him, and made these jobs possible, with I think a lot of mutual respect on both sides. And the Post Office was able to supply the men, the material and the maintenance, without any trouble, which is a great tribute to the men and the organization.48

Turing’s biographer and fellow-mathematician, Andrew Hodges, records that Colossus was built with extraordinary speed and worked almost perfectly, first time: ‘an astonishing fact for those trained in the conventional wisdom. But in 1943 it was possible both to think and do the impossible before breakfast.’49 The GPO team worked so fast that much of the first Colossus ended up being paid for, not by the government, but by Flowers himself, out of his own salary.50

But as soon as the War was over these men were sent back to their regular work and could make no further contribution to computing, or even talk about what they had done, until the secret finally emerged in the 1970s. As Flowers later told Randell: ‘It was a great time in my life – it spoilt me for when I came back to mundane things with ordinary people.’

…BUT TOOK COMPUTING DOWN THE WRONG PATH

The ENIAC was nearly a dead end for almost the opposite reason. Its designers, Presper Eckert and John Mauchly, were so intent on being successful capitalists that they nearly buried the project themselves. Unlike Colossus, ENIAC gained public recognition but, as the Dutch computer scientist and historian Maarten van Emden has argued, the rapid commercialization that its makers had in mind could have turned it into a ‘revolution that didn’t happen’51 had it not been for the fortuitous and somewhat unwelcome involvement of the Hungarian mathematician John von Neumann, who (through a further chain of chance connections, including discussions years earlier with Alan Turing in Cambridge) was able to relate what he saw to other, apparently quite unrelated and abstruse areas of mathematics and logic.

Eckert and Mauchly seem not to have understood von Neumann’s idea, which he nonetheless published, to their annoyance and free of patent restrictions, in the widely circulated report on ENIAC’s successor, the EDVAC. This report effectively kept computer development alive, and out of the hands of normal capitalist enterprise, which (as van Emden argues) would then have smothered it. He writes that:

Without von Neumann’s intervention, Eckert and Mauchly could have continued in their intuitive ad-hoc fashion to quickly make EDVAC a success. They would also have entangled the first stored-program computer in a thicket of patents, one for each ad hoc solution. Computing would have taken off slowly while competitors chipped away at the initial monopoly of the Eckert-Mauchly computer company. We would not have experienced the explosive development made possible by the early emergence of a design that, because of its simplicity and abstractness, thrived under upheaval after upheaval in electronics.

Von Neumann’s idea – the ‘von Neumann architecture’ – specified a central unit for doing arithmetic; a memory store shared by the program instructions, the data to be worked on and intermediate results; and a control unit to initiate each step of the program, copying data and instructions alternately from and back to memory in a ‘fetch-execute cycle’, as well as receiving input from a keyboard (or other input device) and passing results to a printer (or other output device). The system is robust and comprehensible because it does just one thing at a time, in step with a timing pulse or ‘clock’. This turned out to have surprisingly expensive consequences, as computers began to be applied to tasks never envisaged in the 1940s: taking photographs and movies, playing music, and so on… as we shall see later on, in Chapter 11.
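The cycle described above can be sketched in a few lines of code. The toy instruction set, the program and the data below are invented for illustration; the point is simply that a single memory holds both instructions and data, and the control unit works through them one step per clock tick.

```python
# A minimal, hypothetical sketch of the von Neumann fetch-execute cycle.
# One memory holds the program (addresses 0-4) and the data (addresses 8-10).

memory = [
    ("LOAD", 8),    # address 0: copy memory[8] into the accumulator
    ("ADD", 9),     # address 1: add memory[9] to the accumulator
    ("STORE", 10),  # address 2: write the accumulator back to memory[10]
    ("PRINT", 10),  # address 3: send memory[10] to the output device
    ("HALT", 0),    # address 4: stop
    None, None, None,
    2, 3,           # addresses 8-9: the data, stored alongside the program
    None,           # address 10: room for the result
]

accumulator = 0
program_counter = 0

while True:                                    # one loop iteration = one clock tick
    opcode, address = memory[program_counter]  # FETCH the next instruction from memory
    program_counter += 1
    if opcode == "LOAD":                       # EXECUTE it, one step at a time
        accumulator = memory[address]
    elif opcode == "ADD":
        accumulator += memory[address]
    elif opcode == "STORE":
        memory[address] = accumulator
    elif opcode == "PRINT":
        print(memory[address])                 # prints 5
    elif opcode == "HALT":
        break
```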

Von Neumann’s design is still the basis of nearly all modern computers, and the whole computer revolution might not have happened, had it not been for his freakishly broad interests, his unwanted intervention in Eckert and Mauchly’s business, and then his airy disregard of commercial propriety in circulating his specification. This proved a lucky break for capitalism, despite itself.

More and more powerful machines became possible thanks to von Neumann’s disruptive presence, but capitalist firms still resolutely had nothing to do with their development unless all of the costs were underwritten by governments. As for using computers themselves, firms had to be coaxed endlessly, like recalcitrant children, before they would even try what was good for them. Computer development remained utterly dependent on government support for decades.

In his 1987 study for the Brookings Institution, Kenneth Flamm estimated that in 1950 more than 75 per cent of US computer development funding had come from the government, and any commercial investments were largely made in anticipation of lucrative defense contracts. One such contract financed development of IBM’s 701 machine – originally known as ‘The Defense Calculator’. IBM’s commitment to computers was built on guaranteed returns from military projects. A decade later, in 1961, the US government was still funding twice as much computer research as the private sector did, and this remained largely the pattern through the Cold War era. Another historian, Paul Edwards, has wondered whether digital computers would even have survived had it not been for the Cold War.52

Personal computers (from which iPhones and their like are descended) might have remained a quaint hobbyist idea had today’s commercial norms been in place in the 1970s. The idea was shaped to a great extent by political activists opposed to big business and the military,53 and the world of business scorned them until the first computer spreadsheet (the ‘magic piece of paper’ that recalculates your sums for you when you change any of the figures) appeared in 1979. This was Dan Bricklin’s VisiCalc, which he wrote for the Apple II computer, giving that machine a desperately needed foothold in the business market. Bricklin did not patent the spreadsheet idea – and could not have done so until two years later, by which time it had been picked up by other software companies. The excitement about spreadsheets contributed to IBM’s hurried but decisive entry into the personal computer market in 1981.
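For readers who have never met one, the ‘magic piece of paper’ can be sketched in a few lines of Python: some cells hold figures, others hold formulas, and the totals are recomputed whenever a figure changes. The cell names and the single formula here are invented for illustration, and bear no relation to how VisiCalc itself was written.

```python
# A minimal sketch of the spreadsheet idea: cells A1 and A2 hold figures,
# A3 holds a formula, and the result is recalculated on demand whenever
# a figure changes. Purely illustrative; not VisiCalc's implementation.

cells = {
    "A1": 120,                          # figures entered by the user
    "A2": 80,
    "A3": lambda c: c["A1"] + c["A2"],  # a formula cell: the 'sum' at the bottom
}

def value(cells, name):
    cell = cells[name]
    return cell(cells) if callable(cell) else cell   # evaluate formulas, return figures as-is

print(value(cells, "A3"))   # 200
cells["A2"] = 95            # change one figure...
print(value(cells, "A3"))   # ...and the sum recalculates: 215
```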

When firms finally discovered the computer’s benefits, there was a competitive frenzy. As I will argue in later chapters, this led to a wholesale restructuring and concentration of the economy that somehow yielded very little beneficial effect on standards of living, but a great increase in human inequalities and impacts.

Many firms fell by the wayside along with whole areas of employment. So, too, did whole areas of technical possibility. As firms started to make money from selling computers, competitive development proceeded at such breakneck speed that attempts to open up new architectural possibilities were bypassed before they could be made ready for general use. We will look at some of these in later chapters.

Market forces effected a swift and radical simplification of what people thought of as ‘the computer’, forcing its development to be channelled along a single, very narrow path. As we will see, von Neumann architecture became the ‘only show in town’ – a development that would have made von Neumann himself despair, and which has had surprising environmental consequences. The rich diversity of technologies that characterized computing before that time evaporated, leaving a single, extremely inefficient technology to serve everything from mobile phones to audio equipment to financial markets. The computer revolution, which promised to enrich human lives and reduce human impacts, and could certainly have done so, in the event did the exact opposite.

1 ‘Another suicide at Foxconn after boss attempts damage control’, China Labour Bulletin, 27 May 2010, nin.tl/FoxconnCLB (retrieved 01/06/2010).

2 Nick Cohen, ‘How much do you really want an iPad?’ The Observer, 30 May 2010.

3 Milanovic, Williamson and Lindert, ‘Measuring Ancient Inequality’, NBER working paper, October 2007.

4 George Packer, ‘Change the World: Silicon Valley transfers its slogans – and its money – to the realm of politics’, The New Yorker, 27 May 2013.

5 Glyn Moody, Rebel Code: Linux and the Open Source Revolution, Basic Books, 2009, p 28.

6 GNU is a ‘recursive acronym’ for ‘GNU’s not Unix’. Unix, although a proprietary product, was also largely the work of another lone programmer, Ken Thompson, who wrote it in his own time, against the wishes of management, in 1969.

7 Moody, op cit.

8 W Brian Arthur, The Nature of Technology: What It Is and How It Evolves, Free Press, 2009.

9 J Gleick, The Information: A History, a Theory, a Flood, HarperCollins Publishers, 2011.

10 Arthur, op cit., p 25.

11 Richard Rudgley, Lost Civilisations of the Stone Age, Century, London, 1998.

12 Martin Jones, Feast: Why Humans Share Food, Oxford University Press, 2007.

13 Hugh Brody, The Other Side of Eden, Faber, London, 2001.

14 George Dyson, Baidarka, Alaska Northwest Pub Co, 1986.

15 Christopher Boehm, Hierarchy in the Forest: the evolution of egalitarian behavior, Harvard University Press, 2001.

16 J Sidanius & F Pratto, Social Dominance: an intergroup theory of social hierarchy and oppression, Cambridge University Press, 1999.

17 George Packer, ‘Change the World’, The New Yorker, 27 May 2013, nin.tl/packerNY.

18 Heidi M Ravven, The Self Beyond Itself, New Press, 2013, p 299.

19 David Berreby, ‘The Hunter-Gatherers of the Knowledge Economy’, Strategy and Business, 16 July 1999.

20 Steven A Epstein, Wage Labor and Guilds in Medieval Europe, University of North Carolina Press, 1991, p 245.

21 Lynn Townsend White, Medieval religion and technology: collected essays, Publications of the Center for Medieval and Renaissance Studies, University of California Press, 1978, pp x-xi.

22 Nigel Harris, ‘Globalisation Is Good for You’, Red Pepper, 3 Dec 2007, nin.tl/HarrisRP

23 David Harvey, The Enigma of Capital, Profile, London, 2010, p 46.

24 Jonathan Freedland, The Guardian, 5 Dec 2007, nin.tl/Freedland07

25 Mark Neocleous, ‘The Political Economy of the Dead: Marx’s Vampires’, History of Political Thought, Vol 24, No 4, 2003, pp 668-84.

26 Economic Concentration, ‘Hearings before the Subcommittee on Antitrust and Monopoly of the Committee on the Judiciary’, US Senate, 89th Congress, First Session, 18, 24, 25 and 27 May and 17 June 1965.

27 John Jewkes, The Sources of Invention. Macmillan, London/St Martin’s Press, NY, 1958.

28 Economic Concentration, op cit, p 1075.

29 Ibid, p 1217.

30 Rory Carroll, ‘Silicon Valley’s culture of failure … and “the walking dead” it leaves behind’, The Guardian, 28 June 2014, nin.tl/Siliconfailure

31 N Rosenberg, Inside the Black Box: Technology and Economics, Cambridge University Press, 1983, p 229.

32 Daniel Cohen, P Garibaldi and S Scarpetta, The ICT Revolution: Productivity Differences and the Digital Divide, Oxford University Press US, 2004.

33 Ibid, p 85.

34 William Morris, How we live and how we might live, 1884, 1887.

35 Creativity: Selected Readings, ed Philip E Vernon, Penguin, 1970.

36 Guy Claxton, Hare Brain, Tortoise Mind, Fourth Estate, London, 1998.

37 Ibid, pp 77-78.

38 James D Moran, ‘The detrimental effects of reward on performance’, in Mark R Lepper and David Greene, The Hidden Costs of Reward, Erlbaum 1978; James D Moran and Ellen YY Liou, ‘Effects of reward on creativity in college students of two levels of ability’, Perceptual and Motor Skills, 54, no 1, 1982, 43–48.

39 Daniel Kahneman, Thinking, Fast and Slow, Farrar, Straus and Giroux, 2011.

40 Iain McGilchrist, The Master and His Emissary, Yale University Press, 2009, p 55.

41 Ibid, pp 40-41.

42 Ibid, p 429.

43 Mariana Mazzucato, The Entrepreneurial State: debunking public vs. private sector myths, 2014. Mazzucato published an earlier, shorter version for Demos in 2011, which may be downloaded free from their website: nin.tl/Mazzucato

44 J Gleick, The Information: A History, a Theory, a Flood, HarperCollins, 2011.

45 Brian Randell, ‘Ludgate’s analytical machine of 1909’, The Computer Journal, vol 14 (3) 1971, pp 317-326.

46 Henry Tropp, quoted in B Winston, Media Technology and Society, a history: from the telegraph to the internet, Routledge, London, 1998.

47 Andrew Hodges, Alan Turing: The Enigma, Simon and Schuster, 1983; G Dyson, Turing’s Cathedral, Knopf Doubleday, 2012; Gordon Welchman, The Hut Six Story, McGraw-Hill, 1982.

48 Brian Randell, ‘The Colossus’, in A History of Computing in the Twentieth Century, ed N Metropolis, J Howlett and G-C Rota, Academic Press, 1980.

49 Hodges, op cit, p 268.

50 Randell, ‘The Colossus’, op cit.

51 Maarten van Emden, ‘The H-Bomb and the Computer, Part II’, A Programmer’s Place, accessed 19 Aug 2014, nin.tl/EmdenHBomb

52 PN Edwards, The Closed World, MIT Press, 1996.

53 John Markoff, What the Dormouse Said: How the Sixties Counterculture Shaped the Personal Computer Industry, Viking, 2005.
