INTRODUCTION
Andy Weir couldn’t believe his luck. He always wanted to be a writer and he started writing fanfiction when he was 9. But, being a sensible young man, he doubted he could make a living as a writer, so he trained to be a software engineer and became a computer programmer instead. As a resident of Silicon Valley, this turned out to be a wise decision, and he had a successful career as a programmer for twenty-five years. But he never gave up his dream of being a writer and he continued to write stories in his spare time. He even had a go in the late 1980s at writing a book and trying to get it published, but no one was interested: ‘It was the standard struggling author’s story, couldn’t get any interest – publishers weren’t interested, no agent wanted to represent me, it just wasn’t meant to be.’ Undeterred, Andy continued to write in his spare time – writing was his hobby. As the internet became more prevalent in the late 1990s and early 2000s, he set up a website and began posting his stories online. He had a mailing list that people could sign up to, and he sent them an email whenever he posted a new story. Over a period of ten years, he gradually built up a list of some 3,000 email addresses. Then he started writing serial fiction, posting a chapter at a time on his website and letting his readers know. One of these stories was about a manned space mission to Mars. Being a software engineer, Andy was interested in problem-solving, and he began to think, ‘OK, what if something goes wrong, how do we make sure the crew survives? What if two things go wrong, what do we do then? And suddenly I realized I had a story.’ He wrote in the evenings and at weekends, whenever he had spare time and felt the urge, and when he finished a chapter he posted it on his website. His readers became very engaged in the story and picked him up on some of the technical details about the physics or the chemistry or the maths of a manned mission to Mars, and he would go back and fix it. This active engagement with his readers spurred him on. Chapter by chapter, the story unfolded of an unfortunate astronaut, Mark Watney, who had been knocked unconscious by a violent dust storm shortly after arriving on Mars and woke up to discover that his crewmates had taken him for dead and made an emergency escape without him, leaving Mark alone to survive indefinitely on a remote planet with limited supplies of food and water and no way to communicate with Earth.
After the last chapter of The Martian had been posted on his website, Andy was ready to move on to another project, but he started getting emails from some of his readers saying, ‘Hey, I really love The Martian but I hate reading it in a web browser. Can you make an e-reader version?’ So Andy figured out how to do that – it wasn’t too hard for a software engineer – and he posted an ePub and a Mobi file on his website so that people could download it for free. Then he started getting emails from people saying, ‘Thanks, I really appreciate that you put up e-reader formats, but I’m not very technically savvy and I don’t know how to download a file from the internet and put it on my e-reader. Can you just put it up as a Kindle?’ So Andy did that too – filled in the form on Amazon, uploaded the file and, presto, there it was on the Amazon site, now available as a Kindle ebook. Andy wanted to give it away for free but Amazon require you to put a price on your ebook, so he chose the lowest price that Amazon allowed, 99¢. He sent an email out to his readers and said, ‘There you are everybody, you can read it for free on my website, you can download the free ePub or Mobi version from my website or you can pay Amazon a buck to put it on your Kindle for you’, and to his surprise more people bought it from Amazon than downloaded it for free. The ebook swiftly moved up Amazon’s bestseller list, reaching number one in the sci-fi category and staying there for quite some time. Pretty soon the book was selling about 300 copies a day, but, having never published a book before, Andy had no idea whether this was good, bad or indifferent. He was just pleased that it was getting good customer reviews and lingering in the number one spot for sci-fi on Kindle.
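Andy’s own conversion tools are not described in his account, but the task itself – packaging a set of web-serial chapters into an ePub – is a small one for a programmer. As a purely illustrative sketch, here is how it might be done today with the Python ebooklib package; the identifiers, file names and chapter content below are placeholders, not his actual files:

    # Illustrative only: packaging serial chapters into an ePub with ebooklib.
    from ebooklib import epub

    book = epub.EpubBook()
    book.set_identifier("web-serial-example")   # placeholder identifier
    book.set_title("The Martian")
    book.set_language("en")
    book.add_author("Andy Weir")

    chapters = []
    for i, body in enumerate(["<p>Placeholder text for the first chapter.</p>"], start=1):
        ch = epub.EpubHtml(title=f"Chapter {i}", file_name=f"chap_{i:02d}.xhtml", lang="en")
        ch.content = f"<h1>Chapter {i}</h1>{body}"
        book.add_item(ch)
        chapters.append(ch)

    # Table of contents, navigation files and reading order.
    book.toc = chapters
    book.add_item(epub.EpubNcx())
    book.add_item(epub.EpubNav())
    book.spine = ["nav"] + chapters

    epub.write_epub("the_martian.epub", book)

A Mobi file for the Kindle could then be produced by running the ePub through a converter such as Calibre’s ebook-convert tool, and the result uploaded through Amazon’s self-publishing form, much as the paragraph above describes.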
Then something happened that he never expected. One day he got an email from an agent who said, ‘I think we could get your book into print and if you don’t have an agent, I’d like to represent you.’ Andy couldn’t believe it. Some years earlier, he had written to agents all over the country, begging them to represent him, and no one wanted to know. Now he gets an email out of the blue from an agent who is offering to represent him, and he didn’t even have to ask. ‘I’m like, wow.’
What Andy didn’t know at the time is that, 3,000 miles away in New York, a science-fiction editor at Crown, an imprint of Random House, had been browsing around some of his favourite internet sci-fi sites, as he did from time to time when things were a little slow, and he had come across several mentions of The Martian, so he decided to check it out. He noticed it was number one on the Kindle sci-fi bestseller list and it had lots of good customer reviews, so he bought a copy, dipped into it and liked what he read, though he wasn’t sure what to make of all the hard science. He had a phone call lined up with an agent friend of his and, in the course of the conversation, he mentioned the book to him, told him he’d been tracking it on Amazon and suggested he take a look and let him know what he thought. He did, loved it (‘I was just blown away by it’ – the hard science appealed to his geeky nature), got in touch with Andy and signed him up. This was an agent who was accustomed to finding new authors online, sometimes by reading an interesting article on the internet and getting in touch with the author, sometimes by coming across a self-published book on Amazon that looked interesting, so he knew how to navigate this terrain. Out of courtesy to the editor who had called this book to his attention, the agent got back in touch with him and gave him a little time to consider it as an exclusive. The editor sent it around to a few of his colleagues at Crown and asked them to look at it over the weekend; they liked it too, and on Monday they made a generous offer to pre-empt the book and take it off the table. Andy was thrilled and the deal was done. ‘It was a no-brainer’, said Andy; ‘it was more money than I make in a year in my current job, and that was just the advance.’
At around the same time, a small film production company had also spotted The Martian on the Kindle bestseller list and got in touch with Andy, who put them in touch with his new agent. The agent contacted his film co-agent and they used the interest of the small production company to pique the interest of Fox, who snapped up the film rights and announced that the movie would be directed by Ridley Scott with Matt Damon in the lead. With publishing rights now sold to Random House and a Hollywood blockbuster in the works, the scouts began to work their magic with foreign publishers. The buzz machine was spinning and it ramped up quickly. Before long, rights were sold in thirty-one international territories and Andy’s substantial advance was earned out before the book was even published.
To Andy, who was oblivious to these distant conversations, the sudden interest in his book seemed somewhat unreal. He was at work the week that the deals with Random House and Fox were done, in his programming cubicle as usual, and he had to go to a conference room to take a call about the movie deal. ‘It’s like, hey, out of nowhere, all of your dreams are going to come true. It was so unbelievable that I literally didn’t believe it. I hadn’t actually met any of these people, it was all just emails and phone calls, and in the back of my mind I kept thinking, “This might just be a scam.”’ It only hit home when the contract finally arrived and the return address was Random House, 1745 Broadway, New York, NY, and then the cheque for the advance arrived. ‘I thought, “If this is a scam, they’re very bad at it.”’
Once the deal with Random House was done, Andy was asked to take down the Kindle edition, which he did. The text was lightly edited and then sent out to various prominent authors for pre-publication blurbs – the responses were amazing. An array of well-known sci-fi authors raved about this new addition to their genre. All of this helped the editor to get people talking about the book, generate excitement inside the house and encourage the sales reps to get behind the book and push it when they met with the buyers at the major retail outlets – critical factors in the attempt to make a book stand out from the thousands of new titles that are published every week. The Random House edition of The Martian was eventually published as a hardcover and ebook in February 2014 and went straight onto the New York Times bestseller list, where it remained for six weeks. A glowing review in the Wall Street Journal described it as ‘utterly compelling … This is techno sci-fi at a level even Arthur Clarke never achieved.’ The paperback edition was released in October 2014 and again went quickly onto the New York Times bestseller list, reaching the number one spot and remaining on the list well into 2015.
There was something remarkable and unprecedented about Andy’s success: through a series of metamorphoses, a text that started life as a blog on someone’s personal website ended up as an international bestseller and a blockbuster film and, with it, a life and a career were transformed. A generation earlier, none of this would have been possible and a talent like Andy’s might well have gone undiscovered. That was one of the many upsides of the digital revolution in publishing: thanks to the internet, talent could be discovered in new ways and a writer who had been beavering away in relative obscurity could suddenly be catapulted into international stardom. Everyone gains – writer, publisher, millions of readers all over the world. But, remarkable though Andy’s success was, this was only one side of the story. The very changes that had enabled Andy to realize his childhood dream were wreaking havoc in an industry that had operated in pretty much the same way for as long as anyone could remember. The industry by which Andy was so pleased to be embraced had, largely unbeknown to Andy, become a battleground where powerful new players were disrupting traditional practices and challenging accepted ways of doing things, all facilitated by a technological revolution that was as profound as anything the industry had experienced in the five centuries since Gutenberg. The astonishing success of The Martian – from blog to bestseller – epitomizes the paradox of the digital revolution in publishing: unprecedented new opportunities are opened up, both for individuals and for organizations, while beneath the surface the tectonic plates of the industry are shifting. Understanding how these two movements can happen simultaneously, and why they take the form that they do, is the key to understanding the digital revolution in publishing.
The digital revolution first began to make itself felt in the book publishing industry in the 1980s. At this time, the world of Anglo-American trade publishing was dominated by three sets of players that had become increasingly powerful in the period since the 1960s: the retail chains, the literary agents and the publishing corporations.[1] The rise of the retail chains began in the US in the late 1960s with the emergence of B. Dalton Booksellers and Waldenbooks, two bookselling chains that took root in the suburban shopping malls that were becoming increasingly prevalent at that time, as the middle classes moved out of city centres into the expanding suburbs. In the course of the 1970s and 1980s, these mall-based bookstores were eclipsed and eventually absorbed by the so-called book superstore chains, especially Barnes & Noble and Borders, which competed ferociously with one another throughout the 1980s and 1990s as they rolled out their superstores across America. Unlike the mall-based bookstores, these book superstore chains located their stores in prime city locations with large floor areas to maximize stock-holding capacity. The stores were designed as attractive retail spaces that would be welcoming and unthreatening to individuals who were not accustomed to going into a traditional bookstore – clean, spacious, well-lit spaces with sofas and coffee shops, areas to relax and read and no need to check in bags as you entered or left the store. Similar developments occurred in the UK with the rise of Dillons and Waterstones, two book retail chains that competed with one another and with WH Smith, the general high-street newsagent and stationer, in the 1980s and 1990s, until Dillons was eventually absorbed into Waterstones.
The result of these and similar developments (such as the increasing role of mass merchandisers and supermarkets as retail outlets for books) was that, by the late 1980s and early 1990s, a substantial proportion of books published by trade publishers were being sold through retail chains that, between them, controlled a large and growing share of the market. The market share of the retail chains put them in a very strong position when it came to negotiating terms with publishers, as the scale of their commitment to a book, and whether they were willing to feature it in front-of-store displays and at what cost, could make a big difference to the visibility and success of a title. The independent bookstores, by contrast, experienced a steep decline. Hundreds were forced into bankruptcy during the 1990s, unable to compete with the extensive stock range and aggressive discounting of the large retail chains. That was the retail setting of the book trade when a small internet start-up called Amazon opened for business from a suburban garage in Seattle in July 1995.
The second key development that shaped the field of Anglo-American trade publishing in the late twentieth century was the growing power of literary agents. Of course, literary agents were not new – they had been around since the late nineteenth century. But for much of the first century of their existence, literary agents had understood their role as intermediaries who were bringing together authors and publishers and negotiating deals that both parties would regard as fair and reasonable. This self-conception of the literary agent began to change in the 1970s and early 1980s as a new breed of agent – what I call the super-agent – began to appear in the publishing field. Unlike most agents, many of whom had previously worked in publishing houses, the new super-agents were outsiders in the world of publishing and were not attached to the traditional practices of literary agents. They understood the role of the agent in a more legalistic way, not so much as intermediaries but rather as advocates of their clients’ – the authors’ – interests. They were prepared to fight, and to fight hard, to maximize the returns to the authors they represented. They didn’t care whether they ruffled the feathers of the big publishers: good public relations were not part of the role of an agent as they understood it. They knew that there was plenty of money to be made in the publishing business, especially with the massive expansion of bookselling capacity that was being created by the rise of the retail chains, and they believed that authors should get their fair share. They also knew that publishers would not hand out large advances and give better terms to authors unless someone was prepared to fight for them.
The more aggressive, combative style of the super-agents was not shared by all agents – indeed, some abhorred the practices of these new kids on the block. But slowly, almost imperceptibly, the culture of agenting began to change. Agenting became less and less about striking deals that kept everyone happy, and more and more about getting the best deal you could for your authors, even if it meant, on occasion, upsetting a publisher or editor with whom you had a long and amicable relationship. This didn’t mean that the size of the advance became the only basis for deciding which publisher to go with – there were always going to be other considerations, such as the nature of the publishing house, the relationship with the editor, the commitment in terms of marketing, etc. But money up front did matter, and increasingly so. Not only was it a means of livelihood for authors, many of whom wanted to live by their writing if they could, but it was also taken as a sign of the publisher’s commitment: the bigger the advance, the more the publisher would be willing to put behind the book in terms of the size of the print run, the marketing budget, the sales effort and so on. In a market where agents controlled access to the most prized new content, the size of the publisher’s advance became an increasingly important factor in deciding who would acquire the rights to a book. Advances escalated, auctions became more common, and eventually it was only the publishers with access to the deepest pockets – and increasingly that meant the pockets of large corporations – who were able to compete for the most sought-after works.
The third key development that shaped the field of Anglo-American trade publishing was the growth of the publishing corporations. From the early 1960s on, several waves of mergers and acquisitions swept through the world of Anglo-American trade publishing, and many formerly independent publishing houses – Simon & Schuster, Scribner, Harper, Random House, Alfred Knopf, Farrar, Straus & Giroux, Jonathan Cape, William Heinemann, Secker & Warburg, Weidenfeld & Nicolson, to name just a few – were transformed into imprints within large corporations. The reasons for these mergers and acquisitions were complex and they varied from case to case, depending on the circumstances of the houses that were being acquired and the strategies of the acquiring firms; but the overall result was that, by the late 1990s, the landscape of Anglo-American trade publishing had been dramatically reconfigured. In a field where there had once been dozens of independent publishing houses, each reflecting the idiosyncratic tastes and styles of their owners and editors, there were now just five or six large publishing corporations, each acting as an umbrella organization for numerous imprints and each owned, in turn, by a much larger multimedia conglomerate that stood behind it and to which the publishing corporation reported. Most of these conglomerates were large, diversified, transnational businesses with interests in many different industries and countries. Some, such as the German groups Bertelsmann and Holtzbrinck, remained private and family-owned, while others, like Pearson, News Corp, Viacom and Lagardère, were publicly traded companies. In most cases, these conglomerates acquired trade publishing assets both in the US and in the UK, assembling them under a corporate umbrella that carried the same name – Penguin, Random House (now Penguin Random House following their merger in 2013), Simon & Schuster, HarperCollins, Hachette or Macmillan – even if in practice the operations in the US and in the UK operated largely independently of one another and reported directly to the parent company.
The large publishing corporations became major players in the field of Anglo-American trade publishing, together accounting for around half of total retail sales in the US and the UK by the early 2000s. In a field characterized by large retail chains and powerful agents who controlled access to customers and content respectively, there were clear advantages to being big. Scale gave them leverage in their negotiations with the large retail chains, where terms of trade could make a real difference to the profitability of the publisher. It also gave them access to the deep pockets of the large conglomerates, which greatly strengthened their hand when it came to competing for the most sought-after content, where, thanks in part to the growing power of agents, the size of the advance was often the decisive consideration. Smaller and medium-sized publishers simply couldn’t compete with the financial clout wielded by the new publishing corporations, and many eventually hauled up the white flag and joined one of the groups.
In broad terms, these were the three developments that shaped the field of Anglo-American trade publishing during the last four decades of the twentieth century, from roughly 1960 to the early 2000s. Of course, there were many other factors that were important in shaping this field, and many other organizations that were active and significant players in the world of trade publishing during this time: this is a world of bewildering complexity, full of arcane practices, highly ramified supply chains and countless organizations doing a myriad of different things. But if we wanted to understand why the world of Anglo-American trade publishing in the 1980s, 1990s and early 2000s was so different from the world of trade publishing that existed in the 1950s and before, and if we wanted to understand the most significant practices that had become prevalent and taken-for-granted in the industry by the early 2000s – including auctions for new books, mouth-wateringly high advances, stack-’em-high book displays in the major retail outlets, bestsellers on a scale and with a frequency that was unprecedented, high discounts and high returns – then the three developments outlined above would give us the keys.
It was in the context of an industry structured in this way that, from the early 1980s on, the digital revolution began to make its presence felt. Initially, this was a low-key affair, invisible to the outsider. As in so many other sectors of industry, the early impact of the digital revolution was in the area of logistics, supply-chain management and the gradual transformation of back-office systems. For an industry like book publishing, where thousands of new products – that is, books – are published every week, each bearing a unique numerical identifier or ISBN, the potential for achieving greater efficiencies in supply-chain management through the use of IT was enormous. Huge investments were made throughout the 1980s and 1990s to create more efficient systems for managing all aspects of the publishing supply chain, from production, rights and royalties to ordering, warehouse management, sales and fulfilment. Improved IT systems enabled publishers to manage the publishing process more efficiently, enabled wholesalers to offer much better services to retailers, and enabled retailers to monitor their stock levels and re-order on a daily basis in the light of computerized point-of-sale data. Behind the scenes, the entire book supply chain was being quietly but radically transformed. These were not the kinds of developments that would get blood racing through the veins, but it would be hard to overstate their significance for the day-to-day operations of the publishing industry.
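To make the role of those identifiers concrete, the sketch below – illustrative only, not taken from any publisher’s or retailer’s actual software – validates the check digit of an ISBN-13, the kind of automatic verification that ordering, point-of-sale and warehouse systems perform countless times a day.

    # Illustrative sketch: validating an ISBN-13 check digit.
    # The first twelve digits are weighted alternately 1 and 3; the thirteenth
    # is chosen so that the weighted sum is a multiple of ten.
    def isbn13_is_valid(isbn: str) -> bool:
        digits = [int(c) for c in isbn if c.isdigit()]
        if len(digits) != 13:
            return False
        weighted = sum(d * (1 if i % 2 == 0 else 3) for i, d in enumerate(digits[:12]))
        return (10 - weighted % 10) % 10 == digits[12]

    print(isbn13_is_valid("978-0-306-40615-7"))  # True: a standard example number
    print(isbn13_is_valid("978-0-306-40615-6"))  # False: a single wrong digit is caught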
Yet the digital revolution in publishing was never going to be only about the logistics of supply-chain management and the improvement of back-office systems, however important these things are in the day-to-day running of businesses. For the digital revolution had the potential to be far more disruptive than this. Why? What was it about the digital revolution that made it so much more disruptive, indeed threatening, than the many other technological innovations that had impinged on the publishing industry often enough in the course of its 500-year history?
What made the digital revolution unique is that it offered the possibility of a completely different way of handling the content that was at the heart of the publishing business. For, at the end of the day, publishing, like other sectors of the media and creative industries, is about symbolic content – that is, about a particular kind of information that takes the form of stories or other kinds of extended text. What the digital revolution made possible was the transformation of this information or symbolic content – indeed, any information or symbolic content – into sequences of digits (or streams of bits) that can be processed, stored and transmitted as data. Once information takes the form of digitized data, it can be easily manipulated, stored, combined with other data and transmitted using networks of various kinds. Now we’re in a new world, very different from the world of physical objects like cars, refrigerators and print-on-paper books. It is a world of weightless data that can be subjected to a whole new set of processes and transmitted via networks that have their own distinctive properties. And the more that publishing is drawn into this new world, the further it moves away from the old world of physical objects which had been its home since the time of Gutenberg. In short, the symbolic content of the book is no longer tied to the physical print-on-paper object in which it was traditionally embedded.
This, in essence, is why the digital revolution has such far-reaching consequences for the publishing industry and for other sectors of the media and creative industries: digitization enables symbolic content to be transformed into data and separated from the material medium or substratum in which it has been embedded hitherto. In this respect, publishing is very different from, say, the car industry: while the car industry can be (and has been) transformed in many ways by the application of digital technologies, the car itself will always be a physical object with an engine, wheels, doors, windows, etc., even if it no longer has a driver. Not so the book. The fact that, for more than 500 years, we have come to associate the book with a physical object made with ink, paper and glue is, in itself, a historical contingency, not a necessary feature of the book as such. The print-on-paper book is a material medium in which a specific kind of symbolic content – a story, for example – can be realized or embedded. But there were other media in the past (such as clay tablets and papyrus) and there could be other media in the future. And if the content can be codified digitally, then the need to embed that content in a particular material substratum like print-on-paper, in order to record, manipulate and transmit it, disappears. The content exists virtually as a code, a particular sequence of 0s and 1s.
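The point can be made concrete in a few lines of Python; the sentence below is an arbitrary example, but any text, once encoded, is just such a string of 0s and 1s, and nothing is lost when it is turned back into characters.

    # Illustrative sketch: a sentence reduced to, and recovered from, a string of bits.
    text = "It was a dark and stormy night."          # arbitrary example sentence
    data = text.encode("utf-8")                        # the symbolic content as bytes
    bits = "".join(f"{byte:08b}" for byte in data)     # the same content as 0s and 1s
    print(bits[:24])                                   # the first three characters: 010010010111010000100000
    restored = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8)).decode("utf-8")
    print(restored == text)                            # True: the round trip loses nothing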
But the digital revolution did much more than this: it transformed the whole information and communication environment of contemporary societies. By bringing together information technology, computers and telecommunications, the digital revolution enabled ever-increasing quantities of digitized information to be transmitted at enormous speeds, thereby creating new networks of communication and information flow on a scale that was unprecedented. The informational life-worlds of ordinary people were changing as never before. Soon they would be carrying around in their pocket or bag a small device that would function simultaneously as a phone, a map and a computer, enabling them to stay permanently connected to others, to pinpoint their location and get directions, and to access vast quantities of information at the touch of a screen. Traditional creative industries like publishing found themselves caught up in a vortex of change that deeply affected their businesses, but over which they had little or no control. This was a process that was being driven by others – by large technology companies based primarily on the West Coast of the US, far away from the traditional heartlands of Anglo-American trade publishing. These companies were governed by different principles and animated by an ethos that was alien to the traditional world of publishing, and yet their activities were creating a new kind of information environment to which the old world of publishing would have to adapt.
The area of book publishing where the disruptive impact of the digital revolution was first experienced was not in the sphere of consumption, however: it was in the sphere of production. The traditional methods of the publishing industry, whereby a manuscript was received from an author, usually in the form of a typescript, and then edited, copyedited and marked up for the typesetter, were swept aside as the entire production process was turned, step by step, into a digital workflow. Indeed, as more and more authors began to compose their texts by typing on the keys of a computer rather than using a pen and paper or a typewriter, the text became a digital file from the moment of creation – it was born digital, existing only as a sequence of 0s and 1s stored on a disc or in the memory of a computer. The material forms of writing were changing,[2] and, from that point on, the transformation of the text that leads to the creation of the object that we call ‘the book’ could, at least in principle, be done entirely in digital form: it could be edited on screen, revised and corrected on screen, marked up for the typesetter on screen, designed and typeset on screen. From the viewpoint of the production process, the book was reconstituted as a digital file, a database. To a production manager in a publishing house, that’s all the book now is: a file of information that has been manipulated, coded and tagged in certain ways. The reconstitution of the book as a digital file is a crucial part of what I call ‘the hidden revolution’.[3] By that, I mean a revolution not in the product but rather in the process: even if the final product looks the same as it always did, a physical book with ink printed on paper, the process by which this book is produced is now completely different.
While all these steps in the production process could in principle be done digitally, it was never so easy in practice. Digitization did not always simplify things – on the contrary, it often made them more complex. The digital world, with its plethora of file types and formats, programming languages, hardwares, softwares and constant upgrades, is in many ways more complicated than the old analogue world of print. A central part of the history of the publishing industry since the early 1980s has been the progressive application of the digital revolution to the various stages of book production. Typesetting was one of the first areas to be affected. The old linotype machines, which were the standard means of typesetting in the 1970s and before, were replaced in the 1980s by big IBM mainframe typesetting machines and then, in the 1990s, by desktop publishing. Typesetting costs plummeted: whereas, in the 1970s, it typically cost $10 a page to get a book typeset from manuscript, by 2000 it was costing between $4 and $5 a page, despite the decrease in the value of the dollar produced by two decades of inflation. While the shift was decisive and dramatic, it was a confusing time for those who lived through the changes and found themselves having to adapt to new ways of doing things. The job of the typesetter was redefined and lines of responsibility were blurred. Some of the tasks formerly carried out by typesetters were eliminated and others were thrown back on in-house production staff, who suddenly found themselves on the front line of the digital revolution in publishing, obliged to use new technologies and learn new programmes that were themselves constantly changing.
By the mid-1990s, many of the technical aspects of book production, including typesetting and page design, had been thoroughly transformed by the application of digital technologies. Progress was more erratic in other areas, such as editing and printing: here too there were aspects of the workflow that became increasingly digital in character, though in ways that were more complex than a one-way shift from analogue to digital. While many authors were composing texts on computers and hence creating digital files, their files were often too full of errors for publishers to use. It was often easier and cheaper for the publisher to print out the text, edit and mark up the printed page, and then send the edited and marked-up manuscript to a compositor in Asia who would re-key the text and add the tags for the page layout. So while in principle the author’s keystrokes were the point at which the digital workflow could begin, in practice – at least in trade publishing – the digital workflow typically began at a later point, when the edited and marked-up manuscript was re-keyed by the compositor, who supplied the publisher with a file that included additional functionality.
Printing is another area where digitization had a huge impact, though again in ways that were more complex than a simple one-way shift from analogue to digital. Until the late 1990s, most publishers used traditional offset printing for all of their books. Offset has many advantages: print quality is high, illustrations can be reproduced to a high standard and there are significant economies of scale – the more you print, the lower the unit cost. But there are disadvantages too: most notably, there are significant set-up costs, so it is uneconomic to print small quantities. So backlist titles that were selling a few hundred copies or less per year were commonly put out of print by many publishers, and the large trade houses often drew the line much higher. It simply wasn’t economic for them to keep these books in print, taking up space in the warehouse and reprinting in small quantities if and when the stock ran out.
The advent of digital printing changed all that. The basic technology for digital printing had existed since the late 1970s, but it wasn’t until the 1990s that the technology was developed in ways that would enable it to become a serious alternative to the traditional offset presses. As reproduction quality improved and costs came down, a variety of new players entered the field, offering a range of digital printing services to publishers. It was now possible to keep a backlist title in print by sending the file to a digital printer who could reprint small quantities – 10, 20, 100 or 200 copies, far fewer than would have been possible using traditional offset methods. The unit costs were higher than they were with traditional offset printing but still manageable for the publisher, especially if they were willing to raise the retail price. It was even possible to turn the traditional publishing fulfilment model on its head: rather than printing a fixed quantity of books and putting them in a warehouse to wait for them to be ordered and sold, the publisher could give the file to a print-on-demand supplier like Lightning Source, who would hold the file on its server and print a copy of the book only when it received an order for it. In this way, the publisher could keep the book permanently available without having to hold stock in a warehouse: physical stock was replaced by a ‘virtual warehouse’.
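The underlying arithmetic is easy to sketch. The figures below are purely illustrative – they are not drawn from any publisher’s accounts – but they show why a short run that is hopelessly uneconomic on an offset press can be perfectly manageable as a digital reprint.

    # Illustrative numbers only: offset printing carries a large fixed set-up cost
    # and a low cost per copy; digital printing has little set-up cost but a higher
    # cost per copy, so it wins for short runs and loses for long ones.
    def offset_unit_cost(copies, setup=1500.0, run_on=1.20):
        return setup / copies + run_on

    def digital_unit_cost(copies, per_copy=4.00):
        return per_copy

    for n in (50, 200, 500, 2000):
        print(n, round(offset_unit_cost(n), 2), round(digital_unit_cost(n), 2))
    # At 50 copies, offset works out at roughly $31.20 a copy against $4.00 digitally;
    # only somewhere above 500 copies does offset's economy of scale take over.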
By the early 2000s, many publishers in the English-speaking world were using some version of digital printing for their slower-moving backlist titles, whether short-run digital printing or true print-on-demand. Those in the fields of academic and professional publishing were among the first to take advantage of these new opportunities: many of their books were specialized works that sold in small quantities at high prices, and were therefore well suited to digital printing. Many trade publishers were accustomed to dealing in the larger print quantities for which offset printing is ideal, but they too came to realize – in some cases spurred on by the long-tail thesis first put forward by Chris Anderson in 2004[4] – that there was value locked up in some older backlist titles that could be captured by using digital print technology. Publishers – academic, professional and trade – began to mine their backlists, looking for older titles for which they still held the copyright, scanning them, turning them into PDFs and re-releasing them as digitally printed books. Titles that had been put out of print many years ago found themselves being brought back to life. Thanks to digital printing, publishers no longer had to put books out of print at all: they could simply reprint in small quantities or put the file in a print-on-demand programme, thereby keeping the title available in perpetuity. This was one of the first great ironies of the digital revolution in publishing: far from killing off the printed book, the digital revolution gave it a new lease of life, enabling it to live well beyond the age at which it would have died in the pre-digital world. From now on, many books would never go out of print.
These developments in print technology, together with the substantial reduction in costs associated with the digitization of typesetting and book design, also greatly lowered the barriers to entry and opened the way for new start-ups to enter the publishing field. It was now easier than ever to set up a publishing company, typeset and design a book using desktop publishing software on a PC or a Mac, and print in small quantities – or even one at a time – using a digital printer or print-on-demand service. The digital revolution spawned a proliferation of small publishing operations. It also opened the way for an explosion in self-publishing – a process that began in earnest in the late 1990s and early 2000s with the appearance of a variety of organizations using print-on-demand technology, but took on a new character from around 2010, when a plethora of new players entered the self-publishing field.
While these developments were dramatic in their own way, they were only the first stages in a process of transformation that would soon prove to be far more challenging for the established structures and players of Anglo-American trade publishing. With the rise of the internet in the 1990s, the weaving together of information and communication technologies and the growing availability of personal computers and mobile devices with high-speed internet connections, it became possible not just to transform supply chains, back-office systems and production processes, but also to revolutionize the ways in which customers, i.e. readers, acquire books, the form in which they acquire them and, indeed, the ways in which the readers of books relate to those who write them. The traditional print-on-paper book, and the industry that had grown up over a period of some 500 years to produce this object and distribute it to readers through a network of retail outlets, constituted, in effect, a channel of communication that put one set of individuals (writers) in communication with another set of individuals (readers) through a particular medium (the book) and a ramified network of organizations and intermediaries (publishers, printers, wholesalers, retailers, libraries, etc.) which made this communication process possible. The great challenge posed by the digital revolution to creative industries like publishing is that it opened up the possibility of creating entirely new channels of communication between creators and consumers that would bypass the intermediaries that had hitherto enabled this process to take place. Traditional players could be ‘disintermediated’ – that is, cut out of the supply chain altogether.
Perhaps the most dramatic demonstration of the disruptive potential of this aspect of the digital revolution was provided by the music industry. For decades, the music industry, dominated by a small number of major record labels, had been based on an economic model in which recorded music was inscribed in a physical medium, traditionally the vinyl LP, and sold through a network of retail outlets. The first major impact of the digital revolution on the music industry – the development of the CD in the 1980s – did not fundamentally disrupt this model: on the contrary, it simply substituted one physical medium for another and resulted in a surge in sales as consumers replaced their LPs and cassette tapes with CDs. But the development of the MP3 format in 1996, and the coming together in the late 1990s and early 2000s of personal computers and the internet, resulted in a sudden and dramatic change in the way that music was acquired, shared and consumed. Very quickly, the world of recorded music changed from one in which consumers bought albums in bricks-and-mortar stores, occasionally sharing them with friends, to a world in which music could be downloaded, uploaded and shared online, potentially with anyone who had access to the internet.
The explosive implications of this transformation were highlighted most vividly by Napster, the peer-to-peer (P2P) file-sharing service that was launched in 1999. The site catalogued the music files of millions of users so you could see who had what, and then enabled you to download a file from a remote PC, seamlessly and with no money changing hands. Napster grew exponentially – at its peak, it had 80 million registered users worldwide. As music sales began to decline, the music companies and the Recording Industry Association of America (RIAA) sued Napster for infringement of copyright, succeeding in closing it down in 2001. But the genie was out of the bottle and the short life of Napster brought home to everyone the massive disruptive potential of online distribution. A plethora of P2P file-sharing services flourished in the wake of Napster’s demise, many using the BitTorrent protocol that gathers bits of a file from a variety of hosts rather than downloading a file from a single server, making it much harder to shut down.
Quite apart from P2P file sharing, legitimate channels for the online distribution of music grew rapidly in the early 2000s. Apple, the most significant of these, launched the iTunes music player in 2001, the same year that it released the iPod MP3 player, and added the iTunes music store in 2003. Now it was possible for customers to download songs, perfectly legally, for 99¢ a track. By 2008, Apple had become the number one music retailer in the US, outstripping Walmart, Best Buy and Target. During the same period, the sales of CDs in the US collapsed, from 938 million units in 1999 to 296 million in 2009, less than a third of what they had been a decade earlier.[5] Total revenues from US recorded music sales also plummeted, falling from $14.6 billion in 1999 to $7.8 billion in 2009.[6] The collapse of revenues was cataclysmic, as can be seen in figure 0.1.
Figure 0.1 US recorded music revenues by format, 1998–2010
Note: revenues are for cassettes, CDs and downloads (singles and albums) only.
Source: The Recording Industry Association of America (RIAA)
Consumers who were still buying music were paying much less for it than they had paid in the late 1990s, when CDs were the overwhelmingly dominant format. In 1999, 938 million CD sales generated revenue of $12.8 billion, or $13.66 per CD; there were no download sales at that time. By 2009, CD sales had fallen to 296 million units; these were still generating revenue of $14.58 per CD but, because the units sold were less than a third of what they had been a decade earlier, the total revenue generated from CD sales had fallen to $4.3 billion. By contrast, music downloads had grown dramatically since 2004, and by 2009 there were 1,124 million single downloads and 74 million album downloads; taken together, however, these downloads generated only another $1.9 billion, and therefore came nowhere near to making up for the loss of $8.5 billion of revenue on CD sales.[7] Moreover, while many people were paying for downloads through legitimate channels like iTunes, a very large but unknowable number of others were downloading music for free – according to one estimate by the online download tracker BigChampagne, the volume of unauthorized downloads still represented around 90 per cent of the music market in 2010.[8]
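The arithmetic behind these figures is worth setting out, if only to see how small a fraction of the lost CD revenue the new download channels replaced. The sketch below simply re-runs the numbers quoted above; small rounding differences reflect the rounded inputs.

    # Re-running the figures quoted above for US recorded music.
    cd_units_1999, per_cd_1999 = 938_000_000, 13.66
    cd_units_2009, per_cd_2009 = 296_000_000, 14.58

    print(round(cd_units_1999 * per_cd_1999 / 1e9, 1))   # ~12.8 billion dollars of CD revenue, 1999
    print(round(cd_units_2009 * per_cd_2009 / 1e9, 1))   # ~4.3 billion dollars of CD revenue, 2009

    cd_revenue_lost = 12.8e9 - 4.3e9                      # ~8.5 billion dollars wiped out
    download_revenue_2009 = 1.9e9                         # singles and albums combined
    print(download_revenue_2009 / cd_revenue_lost)        # ~0.22: downloads replaced little more than a fifth of the loss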
There were many in the book publishing industry who were looking over their shoulders at the tumultuous developments in the music industry and wondering anxiously if music was the future of books foretold. What would the book publishing industry look like if piracy became rife and total book revenues were cut in half? What kind of revenue models would replace the tried-and-tested model on which the industry had been based for more than 500 years, and how robust would these new models be? How could the book industry protect itself from the rampant file sharing that had become commonplace in the world of music? What would happen to bookstores if more and more books were downloaded as files, or even ordered online rather than bought in bricks-and-mortar bookstores – how could physical bookstores survive? And if they disappeared, or even declined significantly, how would readers discover new books? It didn’t take too much imagination to see that the book publishing industry could be hit just as hard by the tsunami that swept through the music industry in the late 1990s and early 2000s. Senior managers looking out of the windows of their high-rise office blocks in Manhattan might well be wondering whether the days of their panoramic views were numbered.
And yet, during the first few years of the new millennium, the signs of what would actually happen in the book publishing industry were anything but clear. There was no shortage of speculation in the late 1990s and early 2000s about the impending ebook revolution – one much-cited report by PricewaterhouseCoopers in 2000 forecast an explosion of consumer spending on electronic books, estimating that by 2004 it would reach $5.4 billion and comprise 17 per cent of the market. Expectations were also raised by the startling success of one of Stephen King’s early experiments with digital publishing. In March 2000, he published his 66-page novella Riding the Bullet electronically, available only as a digital file that could be downloaded for $2.50; there was an overwhelming response, resulting in around 400,000 downloads in the first twenty-four hours, and 600,000 in the first two weeks. But, notwithstanding Stephen King’s good fortune, the predictions made by PricewaterhouseCoopers and others turned out to be wildly optimistic, at least in terms of the timescale. Those publishers who were actively experimenting with ebooks in the early 2000s invariably found that the levels of uptake were extremely low, indeed negligible: sales of individual ebooks numbered in the tens, in some cases the hundreds, but were nowhere near the hundreds of thousands, let alone millions, of units that many had expected. Whatever was happening here, it didn’t seem to bear much resemblance to the sudden and dramatic transformation of the music industry – or at least not yet.
The story of the ebook’s rise turned out to be much more complicated than most commentators had thought, and as this story unfolded through the first decade of the twenty-first century and into the second, countless predictions, uttered a few years earlier with great conviction, turned out to be wide of the mark. Very few people accurately anticipated what actually happened, and, at every stage in this unfolding story, future developments were always unclear. The truth is that no one really knew what would happen, and for years everyone in the publishing industry was living in a state of deep uncertainty, as if they were moving towards a cliff but never knew whether they would ever reach the edge and what would happen if they did. For some within the publishing industry and many on the fringes of it, ebooks were a revolutionary new technology that would finally drag the publishing world, with its arcane practices and inefficient systems, into the twenty-first century. For others, they were the harbinger of doom, the death-knell of an industry that had flourished for half a millennium and contributed more to our culture than any other. In practice, they were neither, and champions and critics alike would be dumbfounded by the curious course of the ebook.