Regulating Platforms – Terry Flew

The Changing Internet Landscape


From the perspective of the 2020s, the early years of the internet appeared to be marked by a relative absence of regulation. There were certainly restrictions placed upon users’ access to online content, the most common points of intervention being around the internet’s filtering of content that could be deemed to be pornographic, to qualify as hate speech, to promote terrorism, or to infringe copyright (Suzor, 2019; Zittrain, 2002; Zittrain and Palfrey, 2007). But attempts to regulate access to online content typically experienced a significant pushback, at least in liberal democratic societies with a strong civil society. For example, attempts by the Australian federal government to establish mandatory filtering for internet service providers in the years before 2010 were successfully resisted by a diverse coalition of industry and civil society organizations, although the constitutional basis for such restrictions already existed in Australian media and communications law (Australian Law Reform Commission, 2012; Moses, 2010).

Over the course of the 2010s, there was a significant shift in public sentiment towards the regulation of online content. There was growing concern about the role played by digital platforms in the distribution of online content and about how the relationships between content distributors and users were mediated through such platforms. There was the role played by what Ananny and Gillespie have termed ‘public shocks’, that is, online public events that ‘suddenly highlight a platform’s infrastructural qualities and call it to account for its public implications’ (Ananny and Gillespie, 2017, p. 2). There have been many examples of such public shocks; they include the livestreaming of murders, of sexual assaults, of acts of violence, and, in March 2019, of the Christchurch mosque atrocity, in which an Australian-born terrorist murdered fifty people in two mosques in Christchurch, New Zealand (this was streamed on Facebook Live). A variety of public scandals involving the misuse of personal data have also plagued the largest platform businesses, most notably Facebook, which saw the Cambridge Analytica scandal in 2018: the data of up to 87 million Facebook users were harvested and deployed on behalf of political campaigns, including the Brexit referendum campaign of 2016 and Donald Trump’s US presidential campaign in the same year.

The surprise election of Trump in 2016 also drew attention to the pervasiveness of fake news on social media platforms, the potential for electoral manipulation by politically motivated actors, and digital platforms’ lack of accountability for news content accessed from their sites (Allcott and Gentzkow, 2017; Benkler et al., 2018; Caplan, 2017; Flew, 2019). In a lively statement of the societal problems presented by the dominant digital platforms, the actor and comedian Sacha Baron Cohen, in his address to the Anti-Defamation League, described these platforms as ‘the greatest propaganda machine in history’:

Facebook, YouTube and Google, Twitter and others – they reach billions of people. The algorithms these platforms depend on deliberately amplify the type of content that keeps users engaged – stories that appeal to our baser instincts and that trigger outrage and fear … We have lost, it seems, a shared sense of the basic facts upon which democracy depends. (S. B. Cohen, 2019)

These developments added up to a ‘techlash’, to use a term coined anonymously in the Economist (2017). There were a range of public enquiries into the power of digital platforms, adverse findings related to companies such as Google and Facebook, and growing calls by legislators, regulators, and academics for the break-up of digital platforms (Warren, 2019). Clearly, public shocks such as the livestreaming of the Christchurch mosque atrocity on Facebook Live prompted community demands to address the power of technology giants and to attribute social responsibility for the content carried by digital platforms. The Cambridge Analytica scandal was also a catalyst for change, particularly as it revealed the limitations of the recurring cycle of regulation by public apology and vague calls for more focused self-regulation that has long characterized the responses of CEOs of digital platforms to data breaches and privacy-related scandals (Flew, 2018a, 2018b, 2018c). But the demands for more regulation of the internet and the ‘policy turn’ in debates surrounding internet governance had roots in the transformations that had occurred in the digital environment over the course of the 2010s.

Looking beyond the immediate issues that have underpinned the techlash, we can identify five factors behind these wider structural changes. These are:

(1) the changing political economy of the internet, particularly around the rise of platform monopolies and oligopolies;

(2) the platformization of the internet, which shifts debates around governance from whether the internet is or should be governed to how it is governed and who makes the relevant decisions;

(3) the degree to which concerns about the ‘mass’ nature of digital media and communication have come to the forefront of contemporary debates about the internet and digital culture;

(4) the paradoxical relationship of populist politics to digital platforms, whereby platforms function as a primary means of reaching potential supporters outside traditional mass media channels, while at the same time fomenting opposition to ‘tech elites’ as part of a wider anti-elitist politics;

(5) new debates about the role of regulators and the return of regulatory activism, after a long period during which nation-state regulatory agencies were seen as being less significant than corporate self-regulation.

