Read the book Cybernetics and transport processes automation. Tutorial - Vadim Shmal - Page 2
About cybernetics
Cybernetics is the science of communication and control. It also explores the self-perception of people and social groups: how human activity and communication shape collective behavior. The social context of cybernetics is vast and growing. Cybernetics is a dynamic and diverse field of research, and new trends and scientific discoveries will continue to shape it in the coming years.
Cybernetics is a portmanteau word that combines cybernetics with biology. The American mathematician John von Neumann published an article on cybernetics, Automata, in which he outlined the fundamental paradigm of the theory: there are situations that are controlled by a central computer. Von Neumann applied the term «automaton» to any device or system that «can be analyzed like automata».
Cybernetics takes a holistic approach and works with communication at an elementary level. Early cybernetics also explored how language affects the way people interact. Topics of how society and individuals perceive and interact with information technology are of great interest. A special issue of «Cybernetics» examines the meaning and development of the word «cybernetics». These reviews shed light on this rather little-known branch of science. Despite these theoretical advances and new developments, the area is still poorly understood. Only 10% to 30% of researchers working in this field publish more than three articles a year. A 2006 study found that new research proposals struggle to attract the attention of leading journals.
The applied part of cybernetics deals with the control and movement of systems, and with how to regulate or control their behavior. Along with systems theory, statistics, and operations research, cybernetics is one of the main disciplines of science and technology, and it was the first scientific discipline to deal with controlling and influencing the behavior of a system. The main goal of the broad field of cybernetics is to understand and define human intelligence. According to cybernetics, the process of understanding how to build and maintain the human brain and its intellectual capacity is complex and multidimensional.
Cybernetics is defined as the study of interactions between people and things, the study of the interaction between people and their environment, the study of systems, the systematization of actions. The importance of understanding these interrelationships is what made cybernetics one of the most widespread sciences in the 20th century. The scientific study of any human phenomenon – action, planning, protection, communication, etc. – was included in the disciplinary study of cybernetics.
Cybernetics has been defined in different ways, by different people, from a wide variety of disciplines. It is a broad concept that encompasses many areas. At one level, it concerns the nature of all life: the transfer and control of information within and between biological systems. At another, it concerns the control of processes at the atomic and molecular levels and the network connections between them.
Research in automation shows that a key innovation in machine intelligence is achieving or exceeding the human ability to control and manipulate data. The fundamental role of a computer (or smart machine) is not to perform calculations but to manage the information the machine processes. The information network is the basis of intelligence. AI’s primary focus is to develop systems that can monitor the network and dynamically change its connections to improve performance in response to changing circumstances.
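The idea of a system that adjusts its own connections in response to feedback can be illustrated with a minimal sketch. This is not from the book; the class `AdaptiveNetwork` and its weighting scheme are hypothetical, assuming a toy network represented as named links with numeric strengths.

```python
class AdaptiveNetwork:
    """Toy network whose connection weights adapt to performance feedback."""

    def __init__(self, connections):
        # connections: dict mapping a link name to its current weight
        self.weights = dict(connections)

    def adjust(self, link, reward, rate=0.1):
        # Strengthen a link that performed well, weaken one that did not,
        # keeping weights non-negative
        self.weights[link] = max(0.0, self.weights[link] + rate * reward)

    def best_link(self):
        # Route over the currently strongest connection
        return max(self.weights, key=self.weights.get)


net = AdaptiveNetwork({"a": 1.0, "b": 1.0})
net.adjust("a", reward=+1.0)   # link "a" carried traffic successfully
net.adjust("b", reward=-1.0)   # link "b" failed, so it is weakened
print(net.best_link())
```

The point of the sketch is the feedback loop itself: the system does not compute a fixed answer but continuously reshapes its connections as circumstances change.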
The «correlation versus causality» discussion in cybernetics means that we need to interpret data without succumbing to Cartesian dualism. In terms of neoclassical economics, the main driving forces of business are the subjective preferences of people, driven by incentives. The emergent point of view describes a system in which different levels of causal structure appear and disappear over time. Bostrom uses this model to examine the nature of intelligence.
Robots and other artificial intelligence systems must evolve following a strategy of making the system as responsive to the environment as possible. They must constantly adapt and improve within the rules given to them; this strategy is adopted because a human programmer cannot foresee all future events. The rule-based nature of AI is a key ingredient in its evolution and, in other words, its goal (although this goal is often overlooked). The ability to learn from experience (learning by doing) is fundamental to intelligent behavior.
The human-led development of AI will not consist of building a high-performance «superintelligence», but of strengthening and expanding the system according to the fundamental cybernetic principles we expect from people: learning, adaptation, and repetition. A certain «learning to learn» (programmability, emergent behavior) is the foundation of cybernetics.
Once an AI is created, the system must evolve like any other living system, learning to adapt to its environment as it develops through natural selection, much like a Darwinian process. The emulation (evaluation) process is critical to what happens in AI. We can simulate an AI system by simulating a problem; we did this by simulating a chess program. However, the result is limited: it is only able to reproduce simple chess-related activities. This is possible because we have limited the number of things the system can do. We have only simulated the output of the program.
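The limitation described above, that simulating only a program's output yields a system that cannot go beyond what was recorded, can be sketched as follows. The class `OutputEmulator` and the recorded moves are hypothetical illustrations, not part of the book.

```python
class OutputEmulator:
    """Replays recorded outputs; it cannot generalize beyond them."""

    def __init__(self, recorded):
        # recorded: dict mapping an observed input to the output it produced
        self.recorded = dict(recorded)

    def respond(self, position):
        # Only inputs seen during recording can be reproduced
        return self.recorded.get(position, "unknown position")


# Emulate a chess program from a single recorded exchange
emu = OutputEmulator({"e2e4": "e7e5"})
print(emu.respond("e2e4"))   # reproduces the recorded reply
print(emu.respond("d2d4"))   # outside the simulated output
```

The emulator looks intelligent only within the range of behavior that was captured; any position outside the recording exposes that nothing of the underlying process was learned.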
It is impossible to create a robot unless we first understand the basic process by which the system learns, building it by trial and error. To learn, a system must understand what it is doing and have some ability to reverse the processes it is learning. The process of developing an AI system should copy a simpler system with its own rules.
Since we do not design «upgrades» to our artificial intelligence systems, they evolve by copying some simpler system. An adaptive system does not repeat a fixed sequence of events in order to learn; rather, it needs to learn different patterns, behaviors, and habits. This imitation process is based on a stimulus-response function.
The principle of adaptive learning (or learning by doing) is a good example of imitation in action. It is the process by which any machine, computer, or intelligent agent learns how to behave based on its experience. Learning by imitation is similar to this principle, but it rests on a person (or group of people) imitating another group or person in order to learn something new. The emulated group or person has their own individual rules of operation (rules of imitation) that determine what types of reactions or behaviors are learned.
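The stimulus-response function behind learning by imitation can be sketched minimally: a learner observes a demonstrator's reactions and builds its own table of responses. The class `ImitationLearner` and the traffic-light behavior are hypothetical examples, not taken from the book.

```python
class ImitationLearner:
    """Learns a stimulus-response table by observing a demonstrator."""

    def __init__(self):
        self.policy = {}

    def observe(self, stimulus, response):
        # Record the demonstrator's reaction to each stimulus
        self.policy[stimulus] = response

    def react(self, stimulus):
        # Reproduce the observed behavior; None if never demonstrated
        return self.policy.get(stimulus)


# The demonstrator's rules of operation, observed as stimulus-response pairs
teacher_behaviour = [("red light", "stop"), ("green light", "go")]

learner = ImitationLearner()
for stimulus, response in teacher_behaviour:
    learner.observe(stimulus, response)

print(learner.react("red light"))
```

As in the text, what the learner acquires is bounded by the demonstrator's own rules: a stimulus the demonstrator never exhibited produces no learned response.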
Adaptive emulators (learning by emulation) play a crucial role in the development of intelligence. This is the most important mechanism for learning and developing knowledge. According to Bostrom, they will also play a crucial role in the evolution of intelligent systems.
Emulation cannot learn if the observer does not, and the observer must be able to learn. This is called an observer loophole. This is the simplest explanation for the so-called social intelligence problem. In practice, the observer loophole makes the emulations look like real intelligent agents. But they have all their inherent limitations.
Emulation also fails if the emulated system has problems that the observer is not aware of. If the observer cannot tell that the emulated system has problems, it cannot learn from those problems.
This brings us to the final problem with emulation: learning by emulation is only one mechanism by which intelligent systems can evolve. A true adaptive agent is intelligent because it is designed to evolve with the characteristics of an evolving system.
Emulation is useful for teaching how to build intelligent systems similar to existing intelligent systems. However, unlike an intelligent agent, it cannot learn what an intelligent system can and cannot do. This brings us to a very important question: is emulation really a method of studying complex intelligent systems?
For this design philosophy to work, it was important to ensure that we can not only learn about responsive emulators but also improve them. We therefore carefully studied adaptive emulators and developed a system that could learn from them. The learning process began with the ability to identify adaptive emulators. Later, a slightly better method of identifying them emerged, which allowed us to create a responsive emulator with very high usability.
Cybernetics has evolved in ways that distinguish first-order cybernetics (the study of observed systems) from second-order cybernetics (the study of observing systems). In particular, second-order cybernetics is usually associated with systems that control and act on each other, and it differs from modern cybernetics mainly on the question of whether reflexivity or reflection plays an explanatory role.
In cybernetics, the phenomena of time and space are identical to physical phenomena in that there are fundamental problems of theory and measurement.
The central concept of first-order cybernetics is the observer. Second-order cybernetics is the theoretical practice of cybernetics; it makes no distinction between cybernetics and modeling. This approach was a fundamental principle of cybernetics as applied to physical systems. It also suggests that cybernetics is not a model but a tool for understanding phenomena and systems.
Cybernetic intelligence can use a logical-linguistic system to transform communication data into machine instructions. Such a system can use the well-known Fisher-Simon transform to convert instructions to data. This allows the system to translate directly from syntactic forms, which in turn allows it to understand language in a statistical sense. The idea theoretically suggests that a cybernetic system can act on a third party (for example, a person) and can serve as an intermediary between a person and a computer, or vice versa.
A cybernetic automaton is a hypothetical (albeit mathematically possible) system that simulates a physical system (for example, a machine).
The design of self-regulating control systems for a planned economy in real time was studied in the 1950s. A good example is the rational programmer method, which holds that rational planning can be used to design control systems. Although somewhat abstract, the method can be understood in terms of feedback control theory. Its main idea was that real-time planned economies, such as those developed in the Soviet Union, could be planned with it: a rational planner manages a system of rational rules by thinking in terms of programs and control systems.
In a rational control system, the planner does not need to be aware of all the activities that the system is performing. Instead, the planner must make decisions based on observable data and improve the system, for example by creating more «rational» rules and more «efficient» data processing mechanisms. Many «pre-programmed» control systems use feedback to automatically improve the system over time. Examples include most industrial automation and industrial robots, as well as many process control systems.
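The feedback principle these control systems rely on can be shown with a minimal sketch of a proportional controller: the planner observes only the deviation between a target and the measured value, and corrects in proportion to it. The function name, gain, and numbers are illustrative assumptions, not from the book.

```python
def proportional_control(setpoint, value, gain=0.5, steps=20):
    """Drive a process variable toward a setpoint using feedback alone."""
    for _ in range(steps):
        error = setpoint - value   # observe the deviation (the only data needed)
        value += gain * error      # correct in proportion to the error
    return value


# Starting far from the target, repeated feedback closes the gap
print(round(proportional_control(100.0, 0.0), 2))
```

Note that the controller never needs a model of everything the system is doing; observing the error and reacting to it is enough, which mirrors the planner who works only from observable data.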