Regulating Platforms – Terry Flew

Preface


The primary aim of Regulating Platforms is twofold. First, the book aims to provide an overview of the issues that are currently arising as governments throughout the world address the social, economic, political, and cultural implications of digital platforms and their power to shape online interactions, at a time when most of the world’s population relies on the internet more than ever before. The book asks practical questions: how should platforms be defined and their different types delineated; what mixture of concerns about the power of digital platforms is in play; and what can the growing array of policymakers, politicians, regulators, corporate advisors, academics, and activists engaged with these issues learn from the initiatives of both state and non-state actors – including the digital platform companies themselves and the third-party regulators they have summoned into existence?

The book also has a second, more normative focus. It asks the question: why now? After a period of over two decades of broad consensus, at least in the western capitalist world, that the best approach to the internet that policymakers could take was to do very little, why did internet governance and regulation surge onto the global agenda in the mid-2010s, and why has it remained there ever since? We are coming to the end of a long period of ‘soft globalism’ and polycentric governance of the internet at the international level, a period during which the prevailing view was that the best decisions were those made by ‘rough consensus’ in multistakeholder forums where governments were a relatively minor player. Why did we see the resurgence of tech nationalism? Why did governments start to ban the platforms of other countries, triggering concerns about a global ‘splinternet’?

It is not hard to see when the sea change in attitudes towards the regulation of online environments happened. In the United States, it can be located in the transition from the Obama administration to the Trump presidency and in the range of concerns that the 2016 presidential election raised, from the circulation of fake news on social media platforms to allegations of electoral interference by foreign powers. The European Union chose to act on widespread concerns about the misuse of personal data online; the General Data Protection Regulation (GDPR) was enacted in 2016 and came into force in 2018, setting rules about how digital platforms and other online entities could make use of material provided by online ‘data subjects’. The GDPR demonstrated that the online environment, long held to constitute a realm beyond the territorial sovereignty and policy knowledge of governments, could in principle be regulated, and that digital tech giants would respond appropriately to the regulation of their activities by sovereign political entities. The GDPR preceded the Cambridge Analytica scandal – that is, the revelations of whistleblower Christopher Wylie to the Guardian journalist Carole Cadwalladr, in 2018, about how data gathered through Facebook were onsold to political campaigns such as the Vote Leave group in the 2016 Brexit referendum in the United Kingdom and the Trump campaign in the United States. Even so, the scandal threw into very sharp relief a range of concerns that had been simmering about the power of digital platforms and the possibilities of misusing it. Discussion of a ‘global techlash’ and the rise of the FAANG or FAMGA – acronyms for Facebook, Apple, Amazon, Netflix, and Google or, in a different version, Facebook, Apple, Microsoft, Google, and Amazon – became commonplace.

The argument of this book is that the push for greater internet regulation is integrally related to the platformization of the internet – that is, the process through which online interactions and engagements are increasingly made to take place on a relatively small number of digital platforms. Moreover, as the platform business model enables significant competitive advantages to accrue to dominant companies through first-mover advantages, lock-ins, and network effects, the companies that own these online spaces acquire monopolistic and oligopolistic economic power. Such power derives from access to ever-growing bodies of user data that allow for behavioural targeting across multisided markets. This power extends not only over consumers but over other businesses as well, both through dominance of digital advertising and through the terms of trade that these giants can impose on digital content providers in the news media, entertainment, and other creative industries. With such power, however, comes considerable social responsibility, as the dominant companies increasingly perform a gatekeeping function over digital communications and play an outsized role in political processes around the world. Hence it becomes imperative that they set guidelines and moderate online speech across issues such as hate speech and online abuse, or disinformation and fake news. But these companies were strongly imbued with the Silicon Valley ethos of maximizing free speech rights and user engagement: this was both their business model and their underlying philosophy. In consequence, they have frequently been uncomfortable with managing online interactions in ways that satisfy the concerns of citizens, politicians, other stakeholders in their businesses, and the public interest.

A historical typology informs the book: in this typology, the evolution of the internet unfolds in three stages. The first stage, which can be dated roughly from 1990 to 2005, is that of the open internet or libertarian internet, as we may call it. It was strongly infused by the Californian ideology – which is summed up in the slogan ‘free minds and free markets’ – as well as by deregulatory economics and countercultural idealism. The idea was that governments could and should be largely kept at bay in matters of regulating the internet, on the grounds that the spontaneous ordering introduced by global netizens would promote a liberal order underpinned by continuous waves of technological innovation.

The second stage, from 2006 to the present, is that of the platformized internet. The rise of Web 2.0 brought together two significant insights: most internet users preferred environments that were managed and curated by others; and such environments enabled online interaction by simplifying processes of accessing content or using devices. As part of this welcome simplification, every online interaction produced a data trail that was potentially open to providing useful insights, which could in turn inform further economic transactions. This was the era of gestation of big tech and the companies that dominate digital communications today. Critics came to label it ‘digital capitalism’, ‘platform capitalism’, and ‘surveillance capitalism’. It has seen growing demands for antitrust action against the tech giants, calls for greater regulation of online activity in order to reduce social harms, and concerns that the digital sorting and reshaping of communities could promote political polarization and a ‘post-truth’ society.

The argument of this book is that we are now entering into a third phase of the internet’s evolution, namely that of the regulated internet. One of the questions I raise concerns the relationship between platform regulation and platform governance. One version is that regulation characterized twentieth-century communications and media policy, and especially nation-state agencies, whereas governance is a broader term that encompasses multiple stakeholders, policy innovation, and approaches derived from behavioural economics and nudge theories. Drawing upon six case studies that have operated at national, regional, and global levels, I critique this argument, proposing instead that the distinction is not so much between regulation and governance as it is between regulations that are applied by external agencies and carry some form of negative sanction for breaches, and regulations that largely rest upon implicit understandings of appropriate platform conduct and the promise of better corporate behaviour. The field is rendered more complex by the fact that governance is an inherent feature of platforms themselves, as they manage multiple stakeholders in diverse market environments that have high levels of public visibility around their decisions. This book argues (1) that public opinion and the role played by governments that seek to represent it push towards greater external regulation of digital platforms, and (2) that nation states are becoming increasingly important actors in shaping online environments. At the same time, many regulatory models are hybrids of nation-state regulation, co-regulation, and self-regulation. This is a space with high levels of innovation when it comes to types of regulatory approaches (the Facebook Oversight Board is a recent example) and with higher levels of civil society engagement, public interest, and media reportage than are found in other industry sectors.

These developments are conceptualized in the book around the proposition that the development of digital technologies generally and of digital platforms specifically can be framed as arising at the intersection of ideas, interests, and institutions. Ideas refers to the dominant ways of thinking about material objects and relationships at any given time, but also to the ideas that challenge and compete with those dominant ways of thinking or ‘mental maps’, as they are sometimes called (Denzau and North, 1994). Interests consist of those entities that seek to advance their own power individually or collectively, in the economic, political, and cultural spheres. Digital platform companies themselves are clearly ‘interests’ in this special usage, and so are other businesses that have relationships with them (news media, entertainment, advertising, etc.), as well as organizations that represent conflicting or competing interests: trade unions, advocacy groups, consumer organizations, and non-government organizations (NGOs) generally. Institutions are those organizational arenas that have responsibility for governing and regulating digital platforms for particular outcomes. Through them the interplay of ideas and interests unfolds and collective decision-making occurs – at local, national, regional, and supranational levels.

What we see from this angle is a mismatch between the rise of digital platform companies as dominant players in the global economy and de facto gatekeepers of digital interactions on the one hand, and the ideas and institutions that underpin platform regulation on the other. Many of the ideas that inform this space and the institutions established for its governance remain tied to the decentralized world of the open internet on which they were premised. In this world, nation states should not govern the digital realm because no one needs to govern it. That was the ideal of spontaneous ordering promised by the libertarian internet. As a result, we come across increasing numbers of instances where nation states that attempt to regulate competition, content, data, and other aspects of the digital environment find their legitimacy in doing so repeatedly challenged, both by the interested companies themselves, which tend to operate globally rather than nationally, and by civil society organizations. At the heart of debates along this line is the question whether those who interact with digital platforms are best understood as national citizens or as global netizens.

I owe a key conceptual debt to the work of Michel Foucault. This book is not a Foucauldian analysis of the internet and digital platforms. It does, however, pick up on two key insights from Foucault. The first concerns the nature of power. I argue here that platform power exists, but does not operate primarily through the ability of digital companies to make people do things that they would not otherwise do. In that respect, my view here differs from the critique offered in the 2020 documentary The Social Dilemma (Orlowski, 2020), which draws a direct link between behavioural targeting through algorithmic manipulations of user data and the turn to online filter bubbles and political extremism. This could be described as evidence of akrasia – the kind of weakness that makes one act against one’s better judgement.

While the algorithmic manipulation of users through access to online data about them is possible – this is the basis of the Cambridge Analytica scandal – the argument of this book is that a comprehensive treatment of digital platform power needs to focus on the capacity of major platforms to shape the economic, political, and communications environments in which they operate. They can shape digital markets, political processes, and the online public sphere. This capacity may or may not be exercised, but it demonstrably exerts a strong influence on other players in the environment, from media companies to political activists and from politicians and political parties to regulators and governments. When, in February 2021, Facebook withdrew the access of Australian news media sites to its global news feed, as part of a bargaining strategy designed to influence the federal government’s proposed News Media and Digital Platforms Mandatory Bargaining Code, it made explicit forms of power that had long been tacit in the media environment. Similarly, the whole debate as to whether platforms such as Facebook, Twitter, and YouTube would act upon false claims and misinformation emanating from Donald Trump and his supporters drew attention to the amount of power of this sort they held within their organizations: power not framed by constitutions, laws, or legislators but constrained only by their own terms of service as interpreted by themselves. This is a form of power quite different from that of big corporations, as it is sui generis power, which constitutes a genuine challenge to other kinds of political authority. This is a challenge that, as many commentators have noted, is unprecedented among media industries but has a historical analogue (and to that extent a precedent) in the rise of the giant industrial trusts of the early twentieth century. Interestingly, the populist challenge to the power of big tech has played out particularly strongly in the United States, where it is one of the very few policy issues that can cross the Republican–Democrat partisan divide.

The responses to this concentrated economic, political, and communications power have been many and varied. A recurrent issue in these debates concerns the global nature of digital platforms and whether nation states have the inclination or the capacity to constitute forms of countervailing regulatory power. It is also the case that there are different national trajectories that have shaped the evolution of the internet in different parts of the world, ranging from the Californian ideology of the early Silicon Valley culture (Barbrook and Cameron, 1996) to the authoritarian statism and techno-nationalism that have shaped the Chinese internet. At the same time, leaders such as Emmanuel Macron, rejecting this binary opposition, have called for a ‘third way’ of regulating the internet (Macron, 2018). The regulatory activism of the European Union illustrates what such initiatives look like in practice, in policies such as the GDPR and the proposed Digital Services Act. But this move has caused concerns about the rise of a global ‘splinternet’ (Lemley, 2021), as different national and regional models of internet governance develop institutional path dependence and the relatively weak and fragmented institutions of global internet governance show little capacity to broker a new framework for shared global governance in an era when nation states are gaining ascendancy.

There are strong reasons to believe that the capacity of nation states to regulate global digital platforms has been systematically underestimated; the idea that state regulation is inherently impossible is less an empirical reality than an ideology that serves dominant interests. One of the important consequences of the platformization of the internet is that it has revealed the extent to which content on digital platforms is already moderated, curated, managed, and governed in various ways. This discovery has shifted the focus from whether online content can be regulated to who should regulate it and what forms of accountability and transparency should be set in place for content moderation decisions. Moreover, the demand to use antitrust laws to ‘break up big tech’ (Warren, 2019) can be seen as being as much about promoting competitive markets as it is about regulating digital capitalism. Indeed, some of the most vocal supporters of antitrust measures are also strong champions of free market capitalism and argue that information monopolies are stifling economic growth and innovation (Stigler Center for the Study of the Economy and the State, 2019). This has prompted critics on the left to argue that antitrust laws do not go far enough in breaking up the architecture of surveillance capitalism and data colonialism (Couldry and Mejias, 2019; Deibert, 2020).

The book concludes with a discussion of the practicalities of platform regulation and of some wider political issues that arise from the turn to a ‘legitimacy’ discourse – that is, one where the stress is on who makes decisions on what basis and whether the private and public actors can be trusted by the citizenry (Bowers and Zittrain, 2020). There are differences between policies and regulations that aim to enhance competition in digital markets and policies and regulations that aim to address online harms and online content. A series of substantive regulatory questions arises. One can ask whether the focus is on illegal or potentially harmful content (and who decides what is ‘potentially harmful’); how well these regulations sit within a revised communications and media policy programme; whether platforms begin to resemble publishers in legal terms; to what extent regulations apply primarily to what the European Union now calls ‘very large online platforms’ (VLOPs); and how proportionate the regulatory burden should be.

I argue in the Conclusion that platform regulation can be seen, not as the state imposing its will upon digital netizens, but rather as a series of steps to democratize decision-making about digital platforms and digital futures. There are of course inherent risks: governments can overreach in their attempts to control information for their own ends; alternatively, regulation may end up taking a largely symbolic form – appearing to address problems when in reality it has no ‘teeth’. Like many issues in the policy domain today, the politics of platform regulation does not align neatly with a left–right political split. Conservatives grapple with the division between a pro-market, pro-globalization wing and a more populist and nationalist wing, which is more likely to attempt to regulate digital platforms, whereas the left is divided into a globally minded cosmopolitan wing, which looks upon the state as a threat to freedom of speech, and a ‘democratic nationalist’ wing – so labelled by Brian Loader (2021) – which wants to make corporate power more accountable at home, in order to address the concerns of a disenfranchised citizenry. It must be said that, although digital platform regulation presents many complexities and challenges, these are not inherently greater than those associated with other industries that deal with intangible global commodities, for instance banking and finance. Part of the issue is around re-establishing public confidence in the regulatory state and in ideas about the public interest, at a time when digital platforms are increasingly promoting themselves as representing the polity more effectively than do its elected representatives.

