Bots and the early internet: Infrastructural roles on Usenet

Other early bots did not have the glamor of ELIZA. For most of the 1970s and 1980s, bots largely played mundane but critical infrastructural roles in the first online environments. Bots are often cast in this “infrastructural” role,3 serving as the connective tissue in human–computer interaction (HCI). In these roles, bots often serve as invisible intermediaries between humans and computers that make everyday tasks easier. They do the boring stuff – keeping background processes running or chatrooms open – so we don’t have to. They are also used to make sense of unordered, unmappable, or decentralized networks. As bots move through unmapped networks, taking notes along the way, they build a map (and therefore an understanding) of ever-evolving networks like the internet.

The limited, nascent online environment from the late 1970s onward was home to a number of important embryonic bots, which would form the foundation for modern ones. The early internet was mainly accessible to a limited number of academic institutions and government agencies (Ceruzzi, 2012; Isaacson, 2014, pp. 217–261), and it looked very different: it consisted of a limited number of networked computers, which could only send small amounts of data to one another. There were no graphical user interfaces (GUIs) or flashy images. For the most part, data was text-based, sent across the network for the purposes of communication using protocols – the standards and languages that computers use to exchange information with one another. Protocols lie at the heart of inter-computer communication, both then and now. For example, a file is sent from one computer to another using a set of pre-defined instructions called the File Transfer Protocol (FTP), which requires that both the sending computer and the receiving computer understand FTP (nowadays, all computers do). One of the most widespread and well-known protocols on the modern internet is the Hypertext Transfer Protocol (HTTP). HTTP was first developed in 1989 by Tim Berners-Lee, who used it as the basis for developing the World Wide Web. Before HTTP and the World Wide Web became nearly universal in the 1990s, computers used other protocols to communicate with each other online,4 including Usenet and Internet Relay Chat (IRC). Both of these early online forums still exist today, and both played a critical role in incubating bot development: they were early breeding grounds for bot developers and their creations.
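To give a concrete sense of what “speaking a protocol” means, the short Python sketch below uses the standard http.client module to send a single HTTP request and print the reply’s status line and headers. It is only an illustration of the idea; the host name is a placeholder.

import http.client

# Open a plain HTTP connection to a placeholder host.
conn = http.client.HTTPConnection("example.org")

# Ask for the root page using the GET method defined by the HTTP protocol.
conn.request("GET", "/")
response = conn.getresponse()

# Because both sides understand HTTP, the status code and headers arrive
# in a format the client already knows how to parse.
print(response.status, response.reason)
for name, value in response.getheaders():
    print(f"{name}: {value}")

conn.close()

Every protocol works on the same principle: both machines agree in advance on the shape of the messages they will exchange.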

Usenet was the first widely available electronic bulletin-board service (often written simply as “BBS”). Developed in 1979 by computer-science graduate students at Duke University and the University of North Carolina, Usenet was originally invented as a way for computer hobbyists to discuss Unix, a computer operating system popular among programmers. Users could connect their computers to each other via telephone lines and exchange information in dedicated forums called “news groups.” Users could also host the service on their own computers, an activity known as running a “news server.” Many users both actively participated in and hosted the decentralized service, incentivizing many of them to think about how the platform worked and how it could be improved.

This environment led to the creation of some of the first online bots: automated programs that helped maintain and moderate Usenet. As Andrew Leonard describes, “Usenet’s first proto-bots were maintenance tools necessary to keep Usenet running smoothly. They were cyborg extensions for human administrators” (Leonard, 1997, p. 157). In the early days especially, bots primarily played two roles: posting content and removing it (or “canceling” it, as the practice was known on Usenet) (Leonard, 1996). Indeed, Usenet’s “cancelbots” were arguably the first political bots. Cancelbots grew out of a Usenet feature that let users delete their own posts: if a user decided they wanted to retract something they had posted, they could flag the post with a cancelbot, a simple program that sent a message to all Usenet servers asking them to remove the content. Richard Depew wrote the first Usenet cancelbot, known as ARMM (“Automated Retroactive Minimal Moderation”) (Leonard, 1997, p. 161).
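The mechanics were simple. The sketch below shows roughly what such a cancel request looked like: a small follow-up article whose Control header asks news servers to delete an earlier post. The header layout follows the standard Usenet article format, but the sender, newsgroup, message ID, and helper function are invented for illustration.

# Rough sketch of a Usenet "cancel" control article (all values are invented).
# A cancelbot simply automated composing and posting articles like this one.

def build_cancel_article(sender, newsgroup, target_message_id):
    """Return the text of a control article asking servers to delete a post."""
    return (
        f"From: {sender}\r\n"
        f"Newsgroups: {newsgroup}\r\n"
        f"Subject: cmsg cancel {target_message_id}\r\n"
        f"Control: cancel {target_message_id}\r\n"
        "\r\n"
        "This post was canceled by its author.\r\n"
    )

print(build_cancel_article(
    sender="alice@cs.example.edu",
    newsgroup="net.unix-wizards",
    target_message_id="<1234@cs.example.edu>",
))

Any server that honored the Control header would delete the targeted article from its copy of the newsgroup.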

Though the cancelbot feature was originally meant to let posters delete their own content, with just a little technical savvy it was possible to spoof identities and remove others’ posts. This meant that, in effect, a bot could be used to censor other users by deleting their content from the network. Once the secret was out, users and organizations began canceling other users’ posts. For example, a bot called CancelBunny began deleting posts about the Church of Scientology on Usenet on the grounds that they violated copyright. A representative of the Church said that it had contacted technologists to “remove the infringing materials from the Net,” and a team of digital investigators traced CancelBunny back to a Scientologist’s Usenet account (Grossman, 1995). The incident drew ire from Usenet enthusiasts, who felt the attempt at automated censorship violated the free-speech ethos of Usenet, and inspired hacktivists like the Cult of the Dead Cow (cDc) to declare an online “war” on the Church (Swamp Ratte, 1995). Another malicious cancelbot “attack,” by a user in Oklahoma, deleted 25,536 messages on Usenet (Woodford, 2005, p. 135). Some modern governments use automation in similar ways and for similar purposes as these cancelbots and annoybots, affecting the visibility of certain messages and indirectly censoring speech online (M. Roberts, 2020; Stukal et al., 2020).

Another prolific account on Usenet, Serdar Argic, posted political screeds to dozens of different news groups with astonishing frequency and volume. These posts cast doubt on Turkey’s role in the Armenian Genocide of the early twentieth century and criticized Armenian users. Usenet enthusiasts still debate today whether Argic’s posts were actually automated, but their high volume and apparently canned responses to keywords such as “Turkey” in any context (even in posts referring to the food) seem to point toward automation.

Over time, more advanced social Usenet bots began to emerge. One of these was Mark V. Shaney, a bot designed by two Bell Laboratories researchers that made its own posts and conversed with human users. Shaney used Markov chains, a probabilistic language-generation technique that strings sentences together based on which words are most likely to follow the ones before them. The name Mark V. Shaney was actually a pun on the term “Markov chain” (Leonard, 1997, p. 49). Markov chains are still widely used today in modern natural language processing (NLP) applications (Jurafsky & Martin, 2018, pp. 157–160; Markov, 1913).
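As a rough, minimal illustration of the technique (not a reconstruction of the Bell Labs program itself), the Python sketch below builds a word-level Markov chain from a tiny invented sample text and then generates new text by repeatedly sampling a word that was observed to follow the current one.

import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it in the text."""
    words = text.split()
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain, start, length=12):
    """Walk the chain, choosing each next word at random from observed followers."""
    word, output = start, [start]
    for _ in range(length):
        followers = chain.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

# Invented sample corpus, a stand-in for the Usenet posts a bot might learn from.
sample = ("the bot posted to the group and the group replied to the bot "
          "and the bot posted to the group again")
chain = build_chain(sample)
print(generate(chain, start="the"))

Because each next word is drawn only from words actually seen after the current one, the output tends to be locally plausible but globally rambling – a hallmark of Markov-chain text generation.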
