Communities, Boundaries and Content Moderation

The notion of the Instagram Community as a singular group, or even a vaguely meaningful collection of people beyond the basic fact that these are all people who have chosen to download the app and create an account on Instagram, stretches the idea of community further than is useful. The use of the term community is, however, not accidental, and instead is part of a broader strategy to name and, to some extent, control or constrain the behaviour of Instagram users (Gillespie 2018). Perhaps the most notable place where the Instagram community is officially named into being is in the Community Guidelines, which outline what is, and what is not, permitted on the platform. In the short summary version of these guidelines, Instagram sums up their position thus: ‘We want Instagram to continue to be an authentic and safe place for inspiration and expression. Help us foster this community. Post only your own photos and videos and always follow the law. Respect everyone on Instagram, don’t spam people or post nudity’ (Instagram 2018b). There are far more details after that short summary, but in essence the description reminds users that Instagram is a policed platform, and there are rules to follow. Internet researcher Tarleton Gillespie describes the Instagram guidelines as positioning the platform as a community keeper, meaning that the Instagram community is spoken about as a ‘fragile community, one that must be guarded so as to survive’ (Gillespie 2018, p. 49). This rationale leads to a number of different rules and forms of content moderation, both algorithmically automated – where algorithms detect and delete the most obvious forms of non-permitted content such as explicit pornography – and manual, where human moderators view and judge content explicitly reported by other Instagram users.

Initially, the Instagram Community Guidelines explicitly banned all nudity, regardless of context, including any display of female nipples. However, after a number of Instagram users publicly reported their accounts being shut down for showing breastfeeding photos, Instagram eventually responded to strong community sentiment that this activity should not be positioned as sexual or worthy of banning (Grossman 2015). The community outcry over the hypocrisy of removing, amongst others, many images of breastfeeding mothers saw Instagram and Facebook revise their guidelines to create exceptions to this rule (Locatelli 2017). As such, Instagram’s updated Community Guidelines show a more contextually aware approach to the visibility of nipples on the platform:

We know that there are times when people might want to share nude images that are artistic or creative in nature, but for a variety of reasons, we don’t allow nudity on Instagram. This includes photos, videos, and some digitally-created content that show sexual intercourse, genitals, and close-ups of fully-nude buttocks. It also includes some photos of female nipples, but photos of post-mastectomy scarring and women actively breastfeeding are allowed. Nudity in photos of paintings and sculptures is OK, too. (Instagram 2018b)

This tempering of the guidelines serves as a reminder that the guidelines, like the ways human moderators judge flagged content, are constantly being revised, and that this process can often be perceived as responding to potential bad publicity (such as the media outcry around breastfeeding photos being removed) rather than to internally driven reviews of what the singular, imagined Instagram community actually wants (Gillespie 2018). The constant revision of the Community Guidelines also reflects the potential arbitrariness of the moderation process, in terms of the reporting options available to users, the feedback they are given after a report, and questions about the consistency of moderation decisions (Witt, Suzor & Huggins 2019).

Instagram, like many social media platforms, has also had real difficulty in managing content which valorizes and promotes eating disorders, which we refer to collectively as ‘pro-ED’ but which is usually referred to by those who post and search for this material as ‘pro-ana’ (promoting anorexia) content (Gerrard 2018). The line between pro-ED material and more socially acceptable depictions of, and aspirations for, thinness is blurred at best. As gender and media researcher Gemma Cobb (2017) argues, on many platforms pro-ED material is deliberately disguised as health motivation posts, aspirational (healthy) weight images or something else that is – in terms of the culture promoted by the platform – socially acceptable. For several years, Instagram’s Community Guidelines explicitly banned eating disorder accounts and content, stating they would remove ‘any account found encouraging or urging users to embrace anorexia, bulimia, or other eating disorders’ (quoted in Cobb 2017). That absolute ban has since been lifted, and Instagram now provides warnings and resources rather than erasing all pro-ED material. Building on advice from health professionals, a new sub-section of Instagram’s Help Centre called ‘About Eating Disorders’ (Instagram 2018a) provides suggestions on how to engage with people with eating disorders, and refers users to resources and services that can provide support.

While Instagram employs a mix of algorithmic filtering, hashtag bans (some permanent, some temporary), account removal and limiting content from search and explore, the policing of hashtags has received the most attention; it is also the easiest to do, since this is where content has been explicitly labelled by the user posting it. Yet hashtag banning is far from perfect; when the thin-inspiration tags #thinspo and #thinspiration were blocked, for example, thinly veiled alternative spellings such as #thynspiration or #thinspoooo would quickly emerge (Cobb 2017). Also, when images are ambiguous (possibly about eating disorders, possibly not), users often attach many hashtags, disrupting the potential of a hashtag to clearly provide context and situate a post. So a post might include #diet, #healthy, #gymlife and many other tags before also including #thinspo and #bonespo, confusing easy (and automated) banning and classification of images (Cobb 2017, p. 109). At the time of writing this chapter, for example, #bonespo returned a health warning before directing users to (a) Get Support, (b) See Posts Anyway or (c) cancel the search (see figure 1.2). And while #bonespo returns the health warning screen, the tag #bonespoo (with one extra o), which was suggested when searching for #bonespo, does not have a warning screen. However, top #bonespoo posts clearly include pro-ED content from accounts whose descriptions include the request ‘Don’t report, just block’, showing that these users have an active awareness of Instagram’s policing of these images and are trying to circumvent being deleted. Similarly, while pro-ED hashtags are being banned and policed, other Instagram mechanisms actively achieve the opposite, using alternate signals to effectively assist users looking for pro-ED content (Gerrard 2018; Olszanowski 2014).

While Instagram has always publicly presented itself as a family-friendly environment, the platform has a long history of being used to distribute all sorts of pornography and other adult material (Shah 2015). While many posts have been removed, accounts closed and hashtags banned (some permanently, others temporarily), Instagram has never been able to completely remove adult material from the platform. Indeed, the eggplant emoji hashtag #🍆 has the dubious honour of being the first known emoji hashtag to be (temporarily) banned from Instagram searches, in large part due to the use of the hashtag to playfully indicate male genitals (Highfield 2018). In many ways, the increased privacy that direct messages and self-deleting stories offer may also provide new avenues for the sharing of nudity, pornography and other material that is simply not publicly visible and thus never flagged by users who see this as problematic. Instagram’s algorithms similarly appear to do a much better job of automatically detecting and removing nudity and pornography than other banned content (either directly, or by triggering review by moderators), but even then the algorithms can be foiled by low-resolution images and other visual distortions that alter the digital footprint of nudity enough to fool an algorithm, while remaining clearly and obviously nudity to (human) viewers.

Figure 1.2. Screenshot of health warning returned with Instagram #bonespo search

The development of communities also leads to norms within these groups – ideas of acceptable practice and conduct – which might work against what other users view as appropriate, or even against Instagram’s own guidelines. As Tarleton Gillespie (2018, p. 197) argues, the way content is policed explains a lot about the power and politics behind a platform:

Content moderation is such a complex sociotechnical undertaking that, all things considered, it’s amazing that it works at all, and as well as it does. Even so, as a society we have once again handed over to private companies the power to set and enforce the boundaries of appropriate public speech for us. That is enormous cultural power held by a few deeply invested stakeholders, and it is being done behind closed doors, making it difficult for anyone else to inspect or challenge.

Instagram’s insistence on describing their more than a billion users as a singular community, with one set of rules to govern that entire community, means that the diversity of users and specific contexts of communication and content sharing are often lost. While chapter 5 outlines some of the very diverse communities that flourish and connect on Instagram, the boundaries established by Instagram’s content moderation and removal policies mean that other groups and other communities have either dived deeper into Instagram, beyond hashtags and public accounts, or have moved to other visual social media platforms altogether.
