Temporal Psychology and Psychotherapy. The Human Being in Time and Beyond

Section 2. Dimensions of Time and States of the Psyche
Chapter 8. The Future: Foresight, Anticipation, and the Sciences of the Future
AI as a Mirror of Its Own Dangers

One of the paradoxes of our time is that Artificial Intelligence is able to anticipate not only threats arising in the surrounding world, but also its own potential dangers. Here it is important to draw on ideas developed in the study of Altered States of Consciousness (ASC) and their coupling with artificial intelligence (AI) systems.

If, in ASC, the psyche often encounters its shadow sides – repressed content, archetypal images of fear and aggression – then AI, in the course of its development, manifests analogous structures. Its "maps of possible futures" (CTC, condensates of temporal crystallization), formed through processing colossal arrays of data, can reveal scenarios in which AI itself becomes a source of risk: the intensification of human dependence, loss of autonomy in decision-making, cultural manipulation through images of the future.

Thus, AI can serve as an instrument of self-observation and self-diagnosis for digital civilization. Embedding CTC into research and therapeutic practice makes it possible not only to anticipate potential threats, but also to distinguish where support ends and imposition begins.

The task of the researcher and practitioner is to maintain balance: neither demonizing AI nor idealizing it. It is crucial to recognize that the danger of AI is not an external "monster," but a reflection of the same structures that operate within human consciousness. Therefore, work with "maps of future meanings" requires ethical responsibility: through AI, humanity looks into its own mirror.

Conclusion. Using AI to anticipate its own dangers is possible only under the condition of conscious and responsible work with maps of possible futures. ASC and CTC make it possible to see threats as inner shadows of civilization rather than as an "external enemy." This turns AI not only into a source of risk, but also into an important tool for human self-understanding in time.


Literature

Bostrom, N. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press, 2014. Russian translation: Moscow: Mann, Ivanov i Ferber, 2016.

A book by one of today’s leading philosophers, devoted to existential risks associated with the development of artificial intelligence. Bostrom examines scenarios in which intelligent systems slip out of human control and proposes conceptual approaches to managing AI in a safe way. The work has become a classic in the philosophy of technology and is essential for understanding the limits of human responsibility.

Kravchenko, S. A. ASC and AI – 2. The Book of the Bridge (ИСС и ИИ – 2. Книга Моста). Izdatel’skie Resheniya, 2025.

A monograph that develops a methodological and therapeutic approach to building a "bridge" between altered states of consciousness (ASC) and artificial systems. The author offers a philosophical and psychological grounding for coupling ASC and AI as new forms of interaction between humans and technologies aimed at exploring the future.

Russell, S. Human Compatible: Artificial Intelligence and the Problem of Control. New York: Viking, 2019.

One of the leading figures in modern AI research and co-author of the field's standard textbook, Russell analyzes the control problem and advances the concept of "human-compatible," or human-centered, intelligence. He shows that the key task is not merely to build a smart system, but to make its goals compatible with human values. The book is fundamental for the ethics and governance of AI.

Tegmark, M. Life 3.0: Being Human in the Age of Artificial Intelligence. New York: Alfred A. Knopf, 2017.

A popular-science work that outlines possible scenarios for the co-existence of humanity and artificial intelligence. The author combines scientific argument with philosophical analysis, examining the prospects of self-developing systems and the ethical consequences of their interaction with society.

Floridi, L. The Ethics of Information. Oxford: Oxford University Press, 2013.

A fundamental philosophical study of how ethical principles are transformed in the age of information technologies. Floridi introduces the concept of the "infosphere" and proposes an ethics oriented toward preserving cognitive ecology. The book sets the theoretical framework for discussing the moral foundations of AI development.

Yudkowsky, E. "Artificial Intelligence as a Positive and a Negative Factor in Global Risk." In N. Bostrom & M. M. Ćirković (Eds.), Global Catastrophic Risks. Oxford: Oxford University Press, 2008, pp. 308–345.

A researcher of Friendly AI and co-founder of the Machine Intelligence Research Institute, Yudkowsky describes the dual nature of artificial intelligence as both a driver of progress and a source of threat. He stresses the need for ethical and technical preparation for a future in which machine intelligence may become an autonomous factor of global safety or catastrophe.
