
Digital Era (1946–1998)

Through the 1930s and 1940s, a number of theoretical and technological advances in the computation of information took place, accelerated by the war and its scientific needs (Wiener 1989). In 1943, the Harvard Mark I became the ‘first fully automatic machine to be completed’. However, it was still ‘programmed by a length of paper tape some three inches wide on which “operation codes” were punched’ (Campbell-Kelly et al. 2013: 57). The pathbreaking conceptual work of the British mathematician Alan Turing was crucial to the development of the first modern electronic computer, known as ENIAC, in 1946. It was a thousand times faster than the Harvard Mark I, and finally capable of holding ‘both the instructions of a program and the numbers on which it operated’ (Campbell-Kelly et al. 2013: 76). For the first time, it was possible to design algorithmic models, run them, read input data and write output results all in digital form, as combinations of binary numbers stored as bits. This digital shift produced a significant jump in data processing speed and power, previously limited by physical constraints. Algorithms became inextricably linked to a novel discipline called computer science (Chabert 1999).

With supercomputers making their appearance in companies and universities, the automated processing of information became increasingly embedded into the mechanisms of post-war capitalism. Finance was one of the first civilian industries to systematically exploit technological innovations in computing and telecommunications, as in the case of the London Stock Exchange described by Pardo-Guerra (2010). From 1955 onwards, the introduction of mechanical and digital technologies transformed financial trading into a mainly automated practice, sharply different from the ‘face-to-face dealings on the floor’ that had been the norm up to that point.

In these years, the ancient dream of creating ‘thinking machines’ spread among a new generation of scientists, often affiliated with the MIT lab led by Professor Marvin Minsky, known as the ‘father’ of AI research (Natale and Ballatore 2020). Since the 1940s, the cross-disciplinary field of cybernetics had been working on the revolutionary idea that machines could autonomously interact with their environment and learn from it through feedback mechanisms (Wiener 1989). In 1957, the cognitive scientist Frank Rosenblatt designed and built a cybernetic machine called the Perceptron, the first operative artificial neural network: an analogue algorithmic system that took readings from input sensors and resolved them into a single dichotomous output – a light bulb that was either on or off, depending on the computational result (Pasquinelli 2017). Rosenblatt’s bottom-up approach to artificial cognition did not catch on in AI research. An alternative top-down approach, now known as ‘symbolic AI’ or ‘GOFAI’ (Good Old-Fashioned Artificial Intelligence), dominated the field in the following decades, up until the boom of machine learning. The ‘intelligence’ of GOFAI systems was formulated as a set of predetermined instructions capable of ‘simulating’ human cognitive performance – for instance, by playing chess effectively (Fjelland 2020). Such a deductive, rule-based logic (Pasquinelli 2017) lies at the core of software programming, as exemplified by the conditional IF–THEN commands running in the back end of any computer application.
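A minimal sketch may help contrast the two approaches just described: a perceptron-style decision, in which weighted sensor inputs resolve into a single on/off output, and a GOFAI-style IF–THEN rule written in advance by a programmer. The code below is purely illustrative and not drawn from the book; the function names, weights and toy chess rule are hypothetical, and the weight-learning (feedback) step of Rosenblatt’s machine is omitted.

```python
# Illustrative sketch only (not from the book): bottom-up perceptron decision
# versus top-down IF-THEN rule. Names and numbers are hypothetical.

def perceptron_output(inputs, weights, threshold):
    """Bottom-up: a weighted sum of sensor readings resolves into a single
    dichotomous output (the 'light bulb' on or off). Weight learning via
    feedback, central to Rosenblatt's Perceptron, is omitted here."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0  # 1 = bulb on, 0 = bulb off

def symbolic_rule(piece_under_attack):
    """Top-down (GOFAI-style): behaviour follows a predetermined rule,
    here a toy chess heuristic."""
    if piece_under_attack:
        return "move the piece"
    return "continue current plan"

# Hypothetical usage: three sensor readings with hand-set weights.
print(perceptron_output([1, 0, 1], [0.6, 0.4, 0.3], threshold=0.8))  # -> 1
print(symbolic_rule(piece_under_attack=True))  # -> "move the piece"
```

The contrast is the point: in the first function the decision emerges from the numerical combination of inputs, while in the second it is fully spelled out in advance as an explicit rule.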

From the late 1970s, the development of microprocessors and the subsequent commercialization of personal computers fostered the popularization of computer programming. By entering people’s lives at work and at home – with videogames, word processors, statistical software and the like – computer algorithms were no longer the reserve of a few scientists working for governments, large companies and universities (Campbell-Kelly et al. 2013). The digital storage of information, as well as its grassroots creation and circulation through novel Internet-based channels (e.g. emails, Internet Relay Chats, discussion forums), translated into the availability of novel data sources. The automated processing of large volumes of such ‘user-generated data’ for commercial purposes, inaugurated by the development of the Google search engine in the late 1990s, marked the transition toward a third era of algorithmic applications.
