1.2. From properties to the uses of emerging technologies
Disruptive digital innovation manifests itself through a variety of technologies that are being deployed across many business sectors. Thus, collaborative robots (cobots, exoskeletons), communicating, ambient or ubiquitous technologies (Internet of Things), artificial intelligence (voice assistants, decision support systems), evaluative and predictive data exploitation (Big Data), immersive environments (virtual and/or augmented reality) and new modalities of human–machine interaction (haptic, sensory and cognitive technologies of the BIM (brain interface machine) type) find applications in various fields of our professional and socio-domestic life. They can be found in the factory of the future, the hospital of the future, the smart home, autonomous transport, connected health (HCS/home care service, HS/home support; Martineau and Bobillier Chaumon 2017) or in the services offered by digital work platforms (uberization, robotics; Casilli 2019).
In order to understand the impact of technologies on professional activity and to identify the research and societal issues they raise for the scientific community, it is first necessary to clarify what they cover in terms of uses and practices, as well as what they bring in terms of resources and constraints.
1) Collaborative robots (also called cobots) are assistants that remain dependent on the intention, gesture or behavior of humans at work. They support employees in their actions and adjust their interventions to those of the professional. The cobot is no longer simply a robotic substitute or a form of mechanical assistance for particular tasks. Here, robotics becomes symbiotic (Brangier et al. 2009): it extends the individual by augmenting human capacities in terms of strength, speed or precision (a minimal sketch of this assistance logic is given below). In this new context of interaction, where these mobile, learning systems evolve, new forms of cooperation and new human–robot interfaces (HRIs) have to be imagined and deployed. The exoskeleton constitutes a special class of cobot: a device that electrically, pneumatically or hydraulically amplifies the movement of each segment of the body. This kind of external skeleton allows movements, load handling and tool manipulation that the body alone would not be able to perform (Claverie et al. 2013). Sensory feedback is then immediate, and we witness a certain form of global person–machine consciousness, a hybrid body schema, or what Merle (2012) calls “the illusion of uniqueness”. However, caution is needed, because these exoskeletons grafted onto the human body strongly constrain body movements and gestures. It is the person who must adjust to this “mechanical corset” rather than the other way around: the actions of the body can be repressed or even prevented, with possible impacts on the physical health of employees (an increase in musculoskeletal disorders – MSDs).
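To make this amplification principle concrete, here is a minimal sketch, in Python, of the kind of assistance logic a cobot or exoskeleton applies: it reads the operator's effort and adds a proportional boost, capped by the device's own limits. The gain, the force limit and the function name are illustrative assumptions, not a real cobot control interface.

ASSISTANCE_GAIN = 2.5        # hypothetical amplification factor (strength augmentation)
MAX_ACTUATOR_FORCE = 400.0   # hypothetical hardware limit, in newtons

def assistive_force(operator_force_n: float) -> float:
    """Return the extra force the actuator adds, following the operator's intention."""
    boost = operator_force_n * (ASSISTANCE_GAIN - 1.0)
    # The device only amplifies the human gesture; it never exceeds its own limits.
    return max(-MAX_ACTUATOR_FORCE, min(MAX_ACTUATOR_FORCE, boost))

if __name__ == "__main__":
    for effort in (10.0, 80.0, 250.0):
        print(f"operator: {effort:6.1f} N -> actuator adds {assistive_force(effort):6.1f} N")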
2) Ambient, ubiquitous or pervasive technologies constitute a second class of these emerging innovations, more generally known as the “Internet of Things” (IoT) or connected/communicating objects. These discreet (because non-intrusive) technologies are integrated into everyday objects (Nehmer et al. 2006). They seek to capture activity and trigger the appropriate actions without the need for human intervention (Gossardt 2017) (e.g. the thermostat that communicates with the personal calendar to turn on the heating at the appropriate time). These can be sensors scattered throughout the living or working space to record physical activities (quantified self; Zouinar 2019), to inform individuals and thus combat sedentary behavior: people are then made aware of what they are doing, what they are not doing – well or not well enough – and what they should do better (the concept of “technological persuasion”, nudge technology or captology; Fogg 2002). There are also digital tracers integrated into production lines to evaluate, in real time, the conformity of professional actions to expected standards. Maintenance is no longer merely corrective; it becomes predictive. That is, the system can react before the error is made, for example by detecting a nut or bolt that is incorrectly or insufficiently tightened (as sketched below). This is the idea of the connected factory of the future (Factory 4.0).
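As an illustration of this predictive logic, the short Python sketch below checks each tightening operation reported by a connected torque sensor against an expected window and raises an alert before a defective assembly leaves the station. The nominal torque, the tolerance and the function name are assumptions made for the example, not an actual factory protocol.

EXPECTED_TORQUE_NM = 45.0   # hypothetical nominal tightening torque, in newton-metres
TOLERANCE_NM = 3.0          # hypothetical acceptable deviation

def check_tightening(measured_torque_nm: float) -> str:
    """Classify a single tightening operation from its measured torque."""
    deviation = measured_torque_nm - EXPECTED_TORQUE_NM
    if abs(deviation) <= TOLERANCE_NM:
        return "ok"
    return "alert: under-tightened" if deviation < 0 else "alert: over-tightened"

if __name__ == "__main__":
    for reading in (44.2, 39.5, 49.8):
        print(f"{reading:5.1f} N.m -> {check_tightening(reading)}")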
3) Artificial intelligence (AI) refers to highly sophisticated algorithmic programs based on artificial neural networks. These programs aim to solve specific problems for which human beings use their cognitive abilities (Zouinar 2020). What constitutes the strength of these devices is their “reasoning” power, which is based on deep learning or machine learning (see Chapter 6). These machine learning techniques make it possible to analyze, extract and classify large amounts of data (a minimal classification sketch is given below). These programs are used to make diagnoses (e.g. skin cancer detection more reliable than human expertise), make decisions (90% of stock market orders are now placed by AI systems, such as high-frequency trading) or “naturally” assist humans in their daily activities, for example Google’s voice assistant, capable of making appointments by phone, voice-recognition systems (connected speakers) or chatbot systems that automatically answer customers’ questions on websites. These devices are described as “artificial intelligence” because they are capable of some form of learning, or even of “intuition”, to deal with complex and sometimes unprogrammed situations. This is the case of the autonomous car, which has to manage a wide range of unforeseen events and incidents along its route. However, it should also be recalled that although AI is capable of learning, evolving and taking initiatives1, it is incapable of giving meaning to what it sees or does, or to the information it processes. Concretely, it can recognize the presence of a cat in millions of photos, but it does not know what a cat is. Symbolic processing and cognitive flexibility remain, for the moment, the prerogative of humans.
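By way of illustration, the Python sketch below trains a classical supervised classifier (scikit-learn's logistic regression on a small bundled dataset of handwritten digits) rather than a deep neural network on millions of images; the principle invoked in the text, learning a mapping from labelled examples and then predicting on unseen data, is the same, and the dataset and the accuracy obtained are purely illustrative.

from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Small bundled dataset: 8x8 grey-level images of handwritten digits, with labels.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=5000)   # "learning" phase on labelled examples
model.fit(X_train, y_train)

print(f"accuracy on unseen images: {model.score(X_test, y_test):.2f}")
# The model classifies digits it has never seen, but it has no notion of what a digit
# is: the same limit as the cat example discussed above.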
4) In order to function, AI must be able to access large volumes of data, provided by, among others, Big Data. Big Data refers to the ability to produce or collect digital data, and to store, analyze and visualize them (Cardon 2015). The traces we leave on the Internet (“likes”, comments), the data sent by connected objects… on all aspects and at all times of our lives constitute a considerable mass of information2 on how we act and think, and even on what we experience and feel (Dagiral et al. 2019). However, these raw data have little meaning and value as they stand. The challenge is to assign value to them through correlation, in order to transform them into information and then into knowledge about the subject, that is, into relevant, useful and targeted data: “smart data” (a minimal sketch of this step is given below). Combined with predictive models, these systems are capable of evaluating and anticipating individual behavior in some detail, or even of seeking to modify attitudes and decisions (as in the case of the Cambridge Analytica scandal3).
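The Python sketch below gives a minimal, deliberately toy illustration of this “raw data to smart data” step: behavioural traces only gain value once they are aggregated and correlated. The dataset, the column names and any pattern that appears in the output are fabricated for the example and carry no empirical value.

import pandas as pd

# Hypothetical raw traces: one row per user, collected from different connected sources.
traces = pd.DataFrame({
    "daily_steps":        [2100, 11500, 6400, 900, 9800, 4200],
    "hours_of_sleep":     [5.5, 7.8, 7.0, 5.0, 8.1, 6.2],
    "likes_on_sport_ads": [0, 4, 2, 0, 5, 1],
})

# Correlating the raw columns turns them into "information": associations that a
# predictive model could then exploit to anticipate, or try to influence, behaviour.
print(traces.corr().round(2))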
5) Immersive environments (based on virtual, augmented and tangible reality) consist of immersing a person in an artificial environment, by stimulating their sensory (via sound, vision, smell), cognitive (information and decision-making) and sensory-motor (haptics, gestures) modalities via appropriate digital devices (3D headsets, haptic gloves, etc.) (see Chapter 3). This world can be imaginary, symbolic or simulate certain aspects of the real world. Different types of immersive environments can be distinguished:
– Virtual reality makes it possible to extract ourselves from physical reality in order to virtually change time, place and/or types of interaction. It gives a person the opportunity to engage in sensory-motor and cognitive activity in an artificial, digitally recreated world (Fuchs et al. 2006), and allows us to simulate what we would do in a real situation. These devices are often used in the field of vocational training: employees find themselves in situations close to their actual working conditions that are otherwise difficult to reproduce (Ganier et al. 2013). For example, it may be necessary to simulate working at altitude for a technical intervention at the top of an electrical pylon, or a difficult operation on a patient involving medical complications. Virtual reality can also be used in the medical field to treat anxiety-provoking situations (treatment of post-traumatic stress disorder (Moraes et al. 2016) or delusions of persecution (Freeman et al. 2016)).
– Augmented reality consists of superimposing/enriching virtual information onto the real physical environment using a video headset, a computer or any other projection system (Marsot et al. 2009). There are applications for production or maintenance tasks, such as indicating to the operator, very precisely, the location of rivets to be fastened on the aircraft cabin (the targets are then projected virtually onto the surface), or presenting an operator wearing a headset with the sequence and location of the various operations to be performed to change a part on a large industrial machine (the steps and the circuit to be changed appear superimposed on the engine). We therefore interact with the virtual in order to know how to act on the real (a minimal projection sketch is given after this list).
– Augmented virtuality (tangible environment) consists of integrating real entities into a virtual environment; both can interact together (Fleck and Audran 2016). For example, an architect will physically manipulate models of houses in a virtually recreated living space in order to evaluate the best exposure to sun and wind and thus calculate possible energy losses. In this case, we interact with reality to act on the virtual.
The integration of these various immersive technologies (virtual reality, augmented reality, augmented virtuality) is called mixed reality (Moser et al. 2019).
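To give a concrete idea of the augmented reality case mentioned above (projecting a virtual target, such as a rivet location, into the operator's view), the Python sketch below applies a plain pinhole camera projection. The camera parameters and the 3D point are illustrative assumptions, not those of any real headset or industrial application.

def project_to_image(point_3d, focal_px=800.0, cx=640.0, cy=360.0):
    """Project a 3D point (camera coordinates, metres) onto 2D pixel coordinates."""
    x, y, z = point_3d
    if z <= 0:
        raise ValueError("point is behind the camera")
    u = focal_px * x / z + cx   # horizontal pixel coordinate
    v = focal_px * y / z + cy   # vertical pixel coordinate
    return u, v

if __name__ == "__main__":
    rivet_position = (0.15, -0.05, 0.80)   # hypothetical target, 80 cm in front of the camera
    u, v = project_to_image(rivet_position)
    print(f"draw the virtual marker at pixel ({u:.0f}, {v:.0f})")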
All these new generations of technologies are thus intended to replace or improve all or part of human functions (physical, sensory and/or cognitive). The objective is to optimize capacities at work (learning, understanding, decision-making, action, etc., both individual and collective) and to make work processes more efficient and effective in order to increase reactivity and profitability. According to a very deterministic approach, it also appears that the choice of such systems aims at the emergence of a working model oriented towards individual excellence, organizational agility, collective intelligence and an efficient pooling of activity between humans and machines. This would also explain the enthusiasm of companies for such systems, as Champeaux and Bret (2000, p. 45) already noted about technologies that are now more conventional: “Adopting them is no longer an opportunity, but an obligation. It is no longer a question of whether we are going to go there, but of how we are going to go there, that is, with what strategy, what investments, what objectives”.
However, while it appears that technology can affect certain dimensions of the activity, it cannot determine or shape it according to predefined and expected patterns. There is no technological determinism in the strict sense of the term. In other words, a technological innovation does not in itself impose a single type of organization or business model, but makes various forms of them possible. It is indeed the use (i.e. the conditions of use of the tool – individual, collective, organizational, etc. – the user's project and experience…) and not the intrinsic characteristics of the technology that will determine its effects, which can therefore be highly contrasted. It is these paradoxes that we will now examine in the following section.