Tumblr - Tama Leaver, Crystal Abidin - Page 26

Algorithms


Algorithms are increasingly used to police user compliance with platform rules, but more broadly they shape flows of information, assign meaningfulness to content, and mold our participation in public life (Gillespie 2012; Langlois 2012). However, algorithms are usually invisible. They become noticeable to everyday users when there are shifts in how they organize information, show or hide content, increase or reduce the visibility of the user’s own content, recommend accounts or posts, or insert moneyed speech into one’s line of sight. As algorithms themselves tend to be proprietary, researchers study their implications via users’ algorithmic imaginaries (Bucher 2017) or algorithmic lore (Bishop 2020). This is what we will do to describe users’ perceptions of and experiences with tumblr algorithms over the past decade.

tumblr’s algorithms were experienced as comparatively unobtrusive until 2017. We link this to Tumblr Inc.’s particular approach to advertising and classifying users – up until 2015 they almost performatively refused targeted advertising (see Chapter 3). But it can also be linked to tumblr’s vision, responses to user criticism, and historical “spam” problems (Perez 2011). There have almost always been spaces within the tumblr interface for recommended content, but users’ reactions to them have been ambivalent. The now-defunct “Spotlight” was introduced in 2011 and clearly articulated by tumblr, and experienced by users, as editorial rather than algorithmic (staff 2011). Being featured on Spotlight was generally considered a good thing by users. In 2010 and 2011 even NSFW blogs could get recommended, if they were popular and original enough. “Radar” has been around since at least 2010 and “Recommended blogs” since 2011 (we were unable to precisely date these features). Users have imagined both to combine editorial and algorithmic techniques. There were many posts on and off tumblr articulating either how to increase one’s chances of being featured in those spaces (e.g., tagging content with the #RadarPlz hashtag), or how to use a variety of browser add-ons to suppress them from one’s Dashboard experience (see Chapter 3). Until the introduction of “Best Stuff First” in 2017, tumblr did not (noticeably) reorganize what users saw on their dashboard (staff 2017). Since 2020, tumblr recommendations have been made across sponsored posts, blogs, searches, and tags, which are demarcated as “sponsored.” Within the mobile app there are additional categories of “recommended group chats,” “recommended for you,” and “watch on tumblr.”

An early case of an algorithmic imaginary (Bucher 2017) emerged after tumblr’s 2012 policy against self-harm blogs. We noticed vernacular techniques for backup hashtags, and for otherwise circumventing the algorithms, circulating among some thinspo blogs (Kanai et al. 2020; see also Chapter 7). Users’ imaginaries of tumblr algorithms shifted more drastically with the 2017 Safe Mode, when algorithms were obviously and intrusively employed to filter content (see Chapter 6). Certain keywords that returned results via browser returned nothing on mobile apps because of app store restrictions. This included “#gay,” because the data the filtering algorithm was trained on had determined that the hashtag often accompanied pornographic content, but the LGBTIQA+ community rightfully interpreted this as an outright attack. tumblr managed to placate users by reversing some of the changes, promising to work on more intelligent solutions for battling porn bots and filtering content, and, above all, by demonstrating that they were listening. Their resolution of this particular governance conflict showed that they understood that moderation involves a politics of visibility (Gillespie 2018), which in the case of sexual self-expression often follows the fault lines of systematic marginalization (e.g., disenfranchising the LGBTIQA+ community).

This understanding seemed to have evaporated by the time of the NSFW ban in 2018. tumblr’s Help page claimed that the new ban was enforced through a “mix of machine-learning classification and human moderation from our team of trained experts,” wherein appeals regarding misflagged posts would be reviewed by humans (tumblr Help Center 2018). tumblr’s classification algorithms (usually referred to as flagging algorithms or flagging bots in vernacular discourse) were shockingly bad, and the public backlash against them spanned platforms (Tiidenberg 2019a). While differentiating permitted nudity (mastectomy or gender-confirming scars, breastfeeding) from prohibited nudity (“female presenting nipples,” any depictions of sex) is indeed a matter of contextual awareness, which automation is bound to fail at, tumblr’s algorithms flagged images of food, knitting, and Joe Biden. By industry standards of image recognition, this was spectacularly poor performance. Trade press and experts speculated that it stemmed from tumblr having trained their algorithms on insufficient datasets, using inaccurate platform models or weak classifiers; further, tumblr’s new, black box algorithms were presumed to have identified patterns between objects in images that the developers did not teach them to identify, which led them to mistakenly flag content (Matsakis 2018). This led users to speculate on, and experiment with, various visual-aesthetic and discursive-logistic workarounds for circumventing tumblr’s image recognition algorithm. A networked and loosely shared pool of algorithmic imaginaries and hacks emerged, where, in order to hide from the algorithmic gaze, bloggers would take sexy selfies in flesh-colored silk stockings or bodysuits, cover nipples with stickers or digital special effects (Figure 1.5), and superimpose text or QR codes onto the images.

At the same time, viewers were asked to refrain from liking, reblogging, or replying to posts, in the hope of remaining hidden from the algorithm. After the initial period, when flagged posts were equipped with a reporting button to address the mistakes, all flagged posts and blogs were forcibly made private, essentially rendering them invisible and inaccessible to everyone but the author. NSFW content hashtagged with culturally specific Chinese or Japanese words or characters (“#変態” or “hentai” and “#露點” or “ludian,” connoting exposed private parts) remained on the site for a couple of months longer, but was eventually removed as well. Needless to say, tumblr’s algorithms were no longer experienced as unobtrusive after that.


Figure 1.5: Artist’s impression of how some tumblr users are using emoji to cover nipples and circumvent flagging, while still being able to share nudes. Art provided by River Juno.

