
Moderating content


Most social media platforms prohibit or limit representations of sex, pornography, violence, obscenity, self-harm, and illegal activities, as well as content that functions as hate speech or harassment (Gillespie 2018). Of course, how stringently different platforms police adherence to this list varies quite a bit. In 2012, tumblr staff posted plans for revising their Content Policy “against self-harm blogs” and proposed the removal of “active promotion of self-harm,” including content that glorified or recommended self-injury, suicide, or eating disorder techniques. Users were invited to provide feedback on the policy change, and the response was intense, immediate, and conflicted. In less than a week, the post received more than 25,000 notes (staff 2012a). Responses characterized the move as, variously, stupid and dangerous; unfair (“what about other blogs that promote alcohol and drugs?,” “It’s still okay to have a racist blog on Tumblr”); exclusionary (“will target primarily women”); unproductive and potentially harmful (“some things need to be talked about”); well-intentioned but misguided (“taking away another safe space”); urgently needed and smart; and impractical (“where does Tumblr plan to draw the line between what is acceptable and what is not?”) (staff 2012a). In their follow-up post, Tumblr Inc. appeared to have consulted with the National Eating Disorders Association and taken some of the user feedback on board, promising to strike a balance between removing content and keeping tumblr a place “where people struggling with these behaviors can find solace, community, dialog, understanding, and hope” (staff 2012b). Unlike Instagram, and perhaps as part of this promise, tumblr did not remove particular tags but started, instead, showing a PSA (“public service announcement,” i.e., information on resources and support organizations) asking “Everything okay?” on search results for particular keywords and hashtags (see Figure 1.4). While the impulse is admirable, the reality of the situation is much more complex. Users have dynamic and ever-developing techniques for circumventing hashtag moderation, and platforms’ automated recommendation systems still circulate self-harm content (Gerrard 2018). Clicking through the PSA and behaving on the platform like a user interested in self-harm will still result in tumblr suggesting self-harm blogs to follow.


Figure 1.4: The public service announcement (PSA) returned when one searches for “proana” on tumblr. Screengrab by authors.
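To make the mechanism concrete, the following is a minimal sketch of such a keyword-triggered interstitial. tumblr’s actual implementation is not public, so every name here (the watchlist, the search function, the response shape) is a hypothetical illustration, not the platform’s code:

```python
# Hypothetical illustration of a PSA interstitial on flagged searches;
# none of these names or terms are tumblr's own.
FLAGGED_TERMS = {"proana", "thinspo", "selfharm"}  # assumed watchlist

def fetch_posts(tag: str) -> list:
    """Placeholder for a real search backend."""
    return []

def search(query: str, acknowledged_psa: bool = False) -> dict:
    """Serve a PSA instead of results when the query matches a flagged
    term, unless the user has already clicked through the PSA."""
    normalized = query.strip().lstrip("#").lower()
    if normalized in FLAGGED_TERMS and not acknowledged_psa:
        return {
            "type": "psa",
            "message": "Everything okay?",
            "resources": ["https://www.nationaleatingdisorders.org/"],
        }
    return {"type": "results", "posts": fetch_posts(normalized)}

# A user searching "#proana" first sees the PSA; clicking through
# (acknowledged_psa=True) returns results as usual.
print(search("#proana")["type"])                         # psa
print(search("#proana", acknowledged_psa=True)["type"])  # results
```

Note how, in this reading, the intervention sits entirely at the search layer: once the interstitial is acknowledged, the content itself flows unimpeded, which is consistent with the circumvention and recommendation problems described above.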

Scholars of sexual social media have remarked that American-owned platforms seem to presume that, of the list of offenses with which we started this section, sexually explicit content will deter advertisers the most (see Paasonen et al. 2019; Tiidenberg and van der Nagel 2020).6 It is perhaps unsurprising that just as David Karp’s attitude toward advertising differed from that of many of tumblr’s competitors (see Chapter 3), so too did Tumblr Inc.’s early approach to moderating sexually explicit content.

tumblr’s early Content Policy and Guidelines documents contained a single sentence stating that those who regularly hosted and uploaded sexual videos would be suspended. The 2012 update to the Community Guidelines elaborated by setting two rules: users who “regularly post sexual or adult-oriented content” were asked to flag their blogs as “Not Suitable For Work (‘NSFW’),”7 and users were welcome to embed links to sexually explicit video but were asked to avoid uploading it, because tumblr was “not in the business of profiting from adult-oriented videos and hosting this stuff is fucking expensive.” The call to self-label ushered in the first version of the so-called Safe Mode, in which content from blogs self-flagged as NSFW was filtered out of the dashboards and search results of users who selected that option. In 2012, Karp went on record saying he was not “into moderating” NSFW content and that tumblr was “an excellent platform for porn,” which he did not “personally have any moral opposition to” (Cheshire 2012). After the sale to Yahoo! in 2013, tumblr started tinkering with the visibility of sexual content in what Gillespie (2018: 173) has described as an attempt on Yahoo!’s part both to let “tumblr be tumblr” and to sell ads. When invited to comment on the matter by talk show host Stephen Colbert, Karp maintained that tumblr had taken a pretty hard line on freedom of speech, arguing that he did not want to “go in there to draw the line between” art and behind-the-scenes photos of “Lady Gaga and like, her nip” (Dickey 2013). The Community Guidelines clauses regarding NSFW content remained the same throughout updates in 2015, 2016, and 2017, although a link to “report unflagged NSFW content” was added in the 2016 update (tumblr 2016). In 2017 a stricter Safe Mode was introduced. The new system was quite complex, filtering blogs that were self-, moderator-, or automatically labeled as NSFW from the external and internal search results of all non-logged-in users and all logged-in users under the age of 18 (see Chapter 6), as sketched below.
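The 2017 filtering rules just described can be restated as a small decision function. The Python sketch below is our own reading of that Safe Mode, under hypothetical names; it is an interpretation of the rules as summarized here, not tumblr’s actual logic:

```python
from dataclasses import dataclass
from typing import Optional

# Hedged sketch of the 2017 Safe Mode rules; all names are hypothetical.

@dataclass
class Blog:
    name: str
    nsfw_self: bool = False       # self-flagged by the blog owner
    nsfw_moderator: bool = False  # flagged by tumblr moderators
    nsfw_auto: bool = False       # flagged by automated labeling

@dataclass
class Viewer:
    logged_in: bool
    age: Optional[int] = None  # unknown for logged-out viewers
    safe_mode: bool = True     # assumed account setting for logged-in adults

def is_nsfw(blog: Blog) -> bool:
    # Any of the three labeling routes suffices to mark a blog NSFW.
    return blog.nsfw_self or blog.nsfw_moderator or blog.nsfw_auto

def visible_in_search(blog: Blog, viewer: Viewer) -> bool:
    """NSFW blogs are hidden from all logged-out viewers and from all
    logged-in viewers under 18; logged-in adults see them only if they
    have turned Safe Mode off."""
    if not is_nsfw(blog):
        return True
    if not viewer.logged_in:
        return False
    if viewer.age is not None and viewer.age < 18:
        return False
    return not viewer.safe_mode
```

Treating the three labeling routes (self, moderator, automatic) as interchangeable inputs to a single is_nsfw() test mirrors the description above, where any one of them was enough to filter a blog from search.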

In late 2018, to the great shock of tumblr users and scholars, Tumblr Inc. announced that it was banning all “photos, videos, or GIFs that show real-life human genitals or female-presenting nipples, and any content … that depicts sex acts” in order to “keep the community safe” (staff 2018). The source of this sudden and radical change was twofold: the US Congress had passed the twin bills of FOSTA/SESTA (the Fight Online Sex Trafficking Act and the Stop Enabling Sex Traffickers Act), amending CDA 230 so that internet intermediaries could be held responsible for “promoting or facilitating prostitution” or “knowingly assisting, facilitating or supporting sex trafficking,”8 and tumblr’s mobile app had been briefly banned from Apple’s App Store on the basis of claims that child pornography had been found on the site.9 LGBTIQA+, fandom, sex worker, artist, and academic circles pointed out that the ban would destroy a unique, safe, and empowering space that many often-marginalized individuals and groups used to explore self and sexuality (Ashley 2019; Liao 2018).10 Despite experts’ and users’ suggestions that there were better ways to deal with presumed child porn and the proliferating porn bots,11 or that perhaps the growing subsection of racist hate speech warranted attention (Tiidenberg 2019a), Tumblr Inc. went ahead with the ban as planned.

Although many users hoped that the NSFW ruling would be reversed under Automattic, CEO Mullenweg dashed that hope by citing the app stores’ intolerance of NSFW content as the reason for the ban (Patel 2019). Sexually explicit content is still present on the platform, though its make-up and volume have changed. In our experience, original visual content created by tumblr users themselves, often of themselves (see Chapter 6), is nearly gone. What remains is pornographic content: GIFs, videos, and still images from porn, which are much more explicit than selfies with female-presenting nipples ever were.
