Neuro-inspired Information Processing – Alain Cappy


Introduction

The invention of the junction transistor in 1947 was undoubtedly the most significant innovation of the 20th century: our day-to-day lives have come to depend on it entirely. Since this date, which we will come back to later, the world has “gone digital”, with virtually all information processed in binary form by microprocessors.

In order to attain the digital world we know today, several steps were essential, such as the manufacture of the first integrated circuit in 1958. It soon became apparent that integrated circuits enabled the processing not only of analog signals, such as those used in radio, but also of digital signals. Such digital circuits were used in the Apollo XI mission that landed humankind on the Moon on July 21, 1969. The astronauts had only very limited computing means at their disposal to achieve this spectacular feat. The flight controller was a machine we would consider very basic by today’s standards: composed of 2,800 integrated circuits, each comprising two three-input “NOR” gates, with 2,048 words of RAM¹ and 38,000 words of ROM² for programs, it worked at a clock frequency of 80 kHz, weighed just 32 kg and consumed 55 W.
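The Apollo computer’s reliance on a single gate type is no accident: NOR is functionally complete, so NOT, OR and AND can all be derived from it. A minimal sketch in Python (purely illustrative, not flight software):

```python
# NOR is functionally complete: every Boolean function can be built
# from it alone, which is why the Apollo computer could be assembled
# entirely from dual three-input NOR gate chips.

def nor3(a, b, c):
    """Three-input NOR: true only when all inputs are false."""
    return not (a or b or c)

def not_(a):
    return nor3(a, a, a)                   # NOR of a signal with itself inverts it

def or3(a, b, c):
    return not_(nor3(a, b, c))             # OR = NOT(NOR)

def and2(a, b):
    return nor3(not_(a), not_(b), False)   # De Morgan: AND = NOR of inverted inputs
```

From these three derived gates, any combinational circuit (adders, multiplexers, the lot) can in principle be wired up.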

The exploit was thus essentially based on “human”, or “cortical”, processing of information: raw processing power, so often put forward today, is not always the sine qua non of success!

In order to reduce the weight of processing systems, while improving their performance, it is necessary to incorporate a large number of logic gates into the same circuit. In 1971, this integration pathway led to a veritable revolution: the development of the first microprocessor. Since then, digital information processing technologies have witnessed tremendous progress, in terms of both their technical performance and their impact on society.

The world in which we live has become one of a “data deluge”, a term coined to describe the massive growth in the volume of data generated, processed and stored by digital media (audio and video), business transactions, social networks, digital libraries, etc. Every minute, for example, the Internet handles almost 200 million e-mails, 40 million voice messages, 20 million text messages and 500,000 tweets. In 2016, the size of the digital universe, defined as the amount of data created, digitized and stored by human beings, was estimated at 16 ZB³ (zettabytes), and this figure is predicted to roughly double every two years, i.e. 44 ZB in 2020 and 160 ZB in 2025. What a leap in just half a century!

This progression, symbolized by the famous “Moore’s law”⁴, which predicted the doubling of microprocessor power⁵ every 18 months, occurred at constant price: a modern microprocessor costs much the same as the 1971 microprocessor did, even though performance has improved by more than five orders of magnitude.
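The compounding behind Moore’s law is easy to check with a few lines of code. A small sketch, taking the 1971 start date from the text and 2016 as an assumed endpoint: doubling every 18 months for 45 years means 30 doublings, a factor of about a billion in gate count (delivered performance grows more slowly than raw gate count, which is consistent with the more conservative “five orders of magnitude” above).

```python
# Compound growth implied by Moore's law: one doubling every 18 months.
# The 1971 start date comes from the text; 2016 is an assumed endpoint
# chosen purely for illustration.

def moore_factor(years, doubling_months=18):
    """Growth factor after `years` at one doubling per `doubling_months`."""
    return 2 ** (years * 12 / doubling_months)

factor = moore_factor(2016 - 1971)   # 45 years -> 30 doublings
print(f"{factor:.3e}")               # 2**30 ≈ 1.074e+09
```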

This remarkable evolution was only made possible by the existence of a universal model of information processing machines, the Turing machine, and a technology capable of physically implementing these machines, that of semiconductor devices. More specifically, the “binary coding/von Neumann architecture/CMOS technology” triplet has been the dominant model of information processing systems since the early 1970s.

Yet two limits have now been reached: that of miniaturization, with devices measuring no more than a few nanometers, and that of dissipated power, with a barrier of the order of 100 watts when the processor is working at full load.

As long as performance improved steadily, the search for new information processing paradigms was never a priority. With the foreseeable saturation of processor performance in the medium term, and with the emergence of new application domains such as connected objects and artificial intelligence, the question of an information processing paradigm possessing both (i) high energy efficiency and (ii) superior performance to current systems on certain types of problems is resurfacing as a matter of some urgency.

This book, dedicated to neuro-inspired⁶ information processing, reflects these considerations. Its purpose is to offer students and researchers interested in this fascinating topic a general overview of the current knowledge and state of the art, while heightening awareness of the innumerable questions posed and problems that remain unresolved.

The subject matter spans a wide variety of fields, combining neuroscience, information technology, semiconductor physics and circuit design, as well as mathematics and information theory.

To enable readers to progress through this book uninterrupted, they are regularly reminded of the basic concepts or referred to the list of reference documents provided. Wherever possible, mathematical models of the phenomena studied are proposed, enabling an analysis that, while simplified, offers a quantitative picture of the influence of the various parameters. This aid to thinking through analytical formulations is, we believe, a precondition for a sound understanding of the physics of the phenomena involved.

This book is organized into four essentially independent chapters:

 – Chapter 1 introduces the basic concepts of electronic information processing, in particular coding, memorization, machine architecture and CMOS technology, which constitutes the hardware support for such processing. As one of the objectives of this book is to expand on the link between information processing and energy consumption, various ways of improving the performance of current systems are presented – particularly neuro-inspired processing, the central topic of this book. A fairly general comparison of the operating principles and performance of a modern microprocessor and of the brain is also presented in this chapter.

 – Chapter 2 is dedicated to the known principles of the functioning of the brain, and in particular those of the cerebral cortex, also known as “gray matter”. In this part, the approach is top-down: the cortex is first looked at from a global, functional perspective before we study its organization into basic processing units, the cortical columns. An emblematic example, vision and the visual cortex, is also described to illustrate these different functional aspects.

 – Chapter 3 offers a detailed exploration of neurons and synapses, which are the building blocks of information processing in the cortex. Based on an in-depth analysis of the physical principles governing the properties of biological membranes, different mathematical models of neurons are described, ranging from the most complex to the simplest phenomenological models. Based on these models, the response of neurons and synapses to various stimuli is also described. This chapter also explores the principles of propagation of action potentials, or spikes, along the axon, and examines how certain learning rules can be introduced into synapse models.

 – Finally, Chapter 4 covers artificial neural and synaptic networks. The two major approaches to creating these networks, using software or hardware, are presented, together with their respective performance, and a state of the art is given for each approach. In this chapter, we show the benefits of hardware in the design and creation of artificial neural and synaptic networks with ultra-low power and energy consumption, and examples of artificial neural networks ranging from the very simple to the highly complex are described.
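As a foretaste of the “simplest phenomenological models” promised in Chapter 3, the sketch below simulates a leaky integrate-and-fire neuron, the archetypal reduced neuron model. All parameter values here are illustrative assumptions, not values taken from the book.

```python
# Minimal leaky integrate-and-fire (LIF) neuron, integrated with the
# Euler method. The membrane potential relaxes toward rest, is driven
# by an input current, and is reset after each threshold crossing.
# Parameters (tau, thresholds, dt) are arbitrary illustrative choices.

def simulate_lif(i_in, dt=0.1, tau=10.0, v_rest=0.0, v_th=1.0, v_reset=0.0):
    """Integrate tau * dV/dt = -(V - v_rest) + i_in(t); return spike steps."""
    v = v_rest
    spikes = []
    for step, i in enumerate(i_in):
        v += dt / tau * (-(v - v_rest) + i)   # leaky integration step
        if v >= v_th:                          # threshold crossed:
            spikes.append(step)                # record a spike...
            v = v_reset                        # ...and reset the membrane
    return spikes

# A constant supra-threshold current produces regular spiking;
# a sub-threshold (here zero) current produces none.
spikes = simulate_lif([1.5] * 500)
```

The regular inter-spike interval under constant drive is the model’s defining behavior, and it already hints at how neurons can encode an input intensity as a firing rate.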

1 Memory that can be both written to and read from.

2 Read-only memory.

3 Zetta = 10²¹ and one byte is made up of 8 bits.

4 Gordon Moore co-founded Intel in 1968.

5 Represented by the number of logic gates per circuit.

6 Also referred to as “bio-inspired”.

