1.1.2 From numbers to symbols

It is indeed important to realize that the method outlined above does not work on numbers, but on a specific representation of numbers. Numbers are an abstract concept, which cannot be manipulated using mechanical rules. Different representations lead to different methods for doing arithmetic. Even though the decimal character string ‘42’, the roman-numeral character string ‘XLII’, and the English-language character string ‘forty-two’ refer to the same number, they cannot be manipulated in the same way. In fact, our recipe for addition never refers to numbers. It takes two sequences of digits as input, and produces one sequence of digits as output. Applying the recipe does not require any knowledge of numbers, only the capacity to work with a finite set of symbols and apply rules to them.
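As an added illustration (this sketch and its names, such as TABLE and add_digit_strings, are not from the original text), the following Python code adds two decimal digit strings using nothing but a finite lookup table, never interpreting the strings as numbers:

```python
# Illustration: adding two decimal digit strings as pure symbol manipulation.
# The only arithmetical 'knowledge' is a finite lookup table; the procedure
# itself only matches and concatenates symbols.

DIGITS = "0123456789"

# (digit, digit, carry symbol) -> (sum digit, new carry symbol).
# Built programmatically here for brevity; in principle it could be
# written out by hand, one entry per combination of symbols.
TABLE = {}
for i, a in enumerate(DIGITS):
    for j, b in enumerate(DIGITS):
        for carry in ("0", "1"):
            total = i + j + int(carry)
            TABLE[(a, b, carry)] = (DIGITS[total % 10], "1" if total >= 10 else "0")

def add_digit_strings(x, y):
    """Add two decimal digit strings, right to left, using only TABLE."""
    width = max(len(x), len(y))
    x, y = x.rjust(width, "0"), y.rjust(width, "0")
    result, carry = "", "0"
    for a, b in zip(reversed(x), reversed(y)):
        digit, carry = TABLE[(a, b, carry)]
        result = digit + result
    return ("1" + result) if carry == "1" else result

print(add_digit_strings("42", "999"))  # prints 1041
```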

A recipe for solving a specific problem by manipulating symbols is called an algorithm. The word is derived from the name of the Persian mathematician al-Khwārizmı̄ who lived in the 9th century. His book describing the ‘Indian numbers’, which today we call Arabic numerals, introduced our modern decimal notation and its rules for arithmetic into Europe [1]. The use of this system was called ‘algorism’ in the late Middle Ages, and later the spelling and meaning transformed into today’s ‘algorithm’. The positional notation for numbers transformed arithmetic from a difficult craft performed by trained specialists into a routine task that could be mastered by almost everyone.

Today the decimal representation of numbers seems so obvious to us that we often make no distinction between a number and its decimal representation. This phenomenon is not limited to numbers. We rarely distinguish carefully between a word and its meaning either, and in quantum physics, to cite an example from science, the confusion between a quantum state and one of its many possible representations is very common. When thinking about computation, it is often important to recall that the universe of symbols and the universe of meanings are separate. In computer science, this is known as the distinction between syntax and semantics. Syntax defines which sequences of symbols a particular algorithm deals with, for example ‘a sequence of any number of the digits 0 to 9’. Semantics defines how such sequences of symbols are interpreted, such as ‘a natural number’. No knowledge of semantics is needed to apply our algorithm for adding two natural numbers, but it is essential to understand what the algorithm does, and in particular which problems it can help to solve.
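A small added sketch (not from the original text) separates the two concerns: a purely syntactic check that a string is a sequence of the digits 0 to 9, and a semantic interpretation of such a string as a natural number:

```python
# Illustration: syntax versus semantics for digit strings.
import re

def is_digit_sequence(s):
    """Syntax: is s a sequence of one or more of the digits 0 to 9?"""
    return re.fullmatch(r"[0-9]+", s) is not None

def as_natural_number(s):
    """Semantics: interpret a digit sequence as a natural number."""
    value = 0
    for digit in s:
        value = 10 * value + "0123456789".index(digit)
    return value

print(is_digit_sequence("042"))  # True: a well-formed symbol sequence
print(as_natural_number("042"))  # 42: its interpretation as a number
```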

A symptom of the confusion between numbers and their representations is the popular saying that ‘computers work only on numbers’. This is patently false: what today’s digital computers work on is sequences of bits, a bit being a symbol from an alphabet containing two symbols. We often choose the digits 0 and 1 to represent this alphabet, suggesting an interpretation as binary numbers, i.e. numbers represented in a positional notation with base 2. The idea that computers work on numbers is mistaken because bits can equally well represent information other than numbers. It is also misleading in another way because it suggests that any number-related problem can be solved by a computer. However, most numbers cannot be represented by sequences of bits and therefore cannot enter into any computation.
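The following sketch (added here for illustration; the bit string is chosen arbitrarily) shows the same sequence of sixteen bits read in two ways, as a natural number in base 2 and as two ASCII characters:

```python
# Illustration: the same bit sequence, read in two ways.
bits = "0100100001101001"  # sixteen bits, chosen arbitrarily

# Reading 1: a natural number in positional notation with base 2.
as_number = int(bits, 2)

# Reading 2: two groups of eight bits, each interpreted as an ASCII character.
as_text = "".join(chr(int(bits[i:i + 8], 2)) for i in (0, 8))

print(as_number)  # 18537
print(as_text)    # Hi
```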

It is easy to see that bits can represent any information at all that can be written as sequences of symbols. Suppose we have an alphabet with N symbols, for example the N = 26 letters of the English alphabet. We can then make up a translation table that assigns a unique pattern of five bits to each symbol in our alphabet. With five bits, we have 2⁵ = 32 distinct values, so six values will be left unused. Our translation table allows us to encode any English word in terms of bit sequences.
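A minimal sketch of such a translation table in Python (added for illustration; the assignment of bit patterns to letters is arbitrary):

```python
# Illustration: a five-bit code for the 26 letters of the English alphabet.
import string

# Translation table: 'a' -> '00000', 'b' -> '00001', ..., 'z' -> '11001'.
# 2**5 = 32 patterns are available, so 6 of them remain unused.
ENCODE = {letter: format(i, "05b")
          for i, letter in enumerate(string.ascii_lowercase)}
DECODE = {pattern: letter for letter, pattern in ENCODE.items()}

def encode_word(word):
    return "".join(ENCODE[letter] for letter in word)

def decode_word(bits):
    return "".join(DECODE[bits[i:i + 5]] for i in range(0, len(bits), 5))

coded = encode_word("forty")
print(coded)               # 25 bits, five per letter
print(decode_word(coded))  # forty
```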

It is less obvious and perhaps even surprising to many readers that most numbers cannot be represented as bit sequences. For natural numbers, there is no problem: any sequence of bits can be interpreted as a natural number in base 2 notation. Conversely, every natural number can be written in base 2 notation, and therefore as a sequence of bits. It is thus possible to define a one-to-one correspondence between natural numbers and bit sequences. In mathematical terminology, the set of natural numbers is isomorphic to the set of bit sequences. Since we can perform computations on sequences of bits, we can perform computations on natural numbers. In fact, any set of values that we wish to do computations on must be isomorphic to the set of bit sequences, or equivalently to the set of natural numbers, or to a subset of such a set. Such sets are called countable. All finite sets are countable: just write down the elements in some order and then write a number below each element, starting from 1. Infinite sets that are countable are called countably infinite sets.
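The correspondence can be made explicit with two small conversion functions (an added illustration, ignoring the question of leading zeros):

```python
# Illustration: natural numbers <-> bit sequences.

def to_bits(n):
    """Write a natural number in base-2 notation as a bit string."""
    return format(n, "b")

def to_number(bits):
    """Interpret a bit string as a natural number in base-2 notation."""
    value = 0
    for bit in bits:
        value = 2 * value + (1 if bit == "1" else 0)
    return value

print(to_bits(42))          # 101010
print(to_number("101010"))  # 42
```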

It is straightforward to see that integers are still countable: use one bit for the sign, and then a natural-number bit sequence for the absolute value. It takes a bit more effort to show that the rational numbers, i.e. the set of all quotients of integers, are countable. By the definition of countability, this requires the assignment of a unique natural number to each rational number. The standard procedure is based on a two-dimensional matrix-like arrangement of the rational numbers:

1/1   2/1   3/1   4/1   …
1/2   2/2   3/2   4/2   …
1/3   2/3   3/3   4/3   …
1/4   2/4   3/4   4/4   …
 ⋮     ⋮     ⋮     ⋮    ⋱

The entries of this infinite matrix can now be enumerated along diagonals:

 1     3     6    10    15   …
 2     5     9    14   …
 4     8    13   …
 7    12   …
11   …
 ⋮     ⋮     ⋮     ⋮     ⋮   ⋱

A more sophisticated enumeration scheme would skip over each number that is equal to one that already received an index earlier. For example, 2/2 would be skipped because it is equal to 1/1.
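The following Python sketch (added for illustration; the function name is invented) enumerates the matrix along its diagonals in exactly this way, skipping every entry whose value has already received an index:

```python
# Illustration: enumerating the positive rationals along the diagonals
# of the matrix above, skipping values already seen (e.g. 2/2, which
# equals 1/1 and therefore already has an index).
from fractions import Fraction

def enumerate_rationals(count):
    """Yield (index, rational) pairs for the first `count` distinct entries."""
    seen = set()
    index = 0
    diagonal = 1
    while True:
        # Diagonal d contains the entries with numerator + denominator = d + 1,
        # visited from the lower-left end of the diagonal upwards.
        for numerator in range(1, diagonal + 1):
            denominator = diagonal + 1 - numerator
            q = Fraction(numerator, denominator)
            if q not in seen:
                seen.add(q)
                index += 1
                yield index, q
                if index >= count:
                    return
        diagonal += 1

for i, q in enumerate_rationals(8):
    print(i, q)  # 1 1, 2 1/2, 3 2, 4 1/3, 5 3, 6 1/4, 7 2/3, 8 3/2
```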

The proof that the set of real numbers is not countable is more involved, and I will not reproduce it here. Like many other proofs concerning infinite sets, it goes back to Georg Cantor, a German mathematician who laid the foundations of set theory in the late 19th century, and actually provided the first rigorous definition of the real numbers. The complex numbers, being a superset of the real numbers, are also uncountable. There are, however, countable number sets larger than the rational numbers. A well-known one in mathematics is the set of algebraic numbers, defined as the roots of polynomials with integer coefficients. In the context of computation, the largest useful countable subset of the real numbers is the set of computable numbers, which was introduced by Alan Turing in the same 1936 publication as the Turing machine. I will come back to this subject in chapters 2 and 3, because it is of central importance for the use of computation in science.

We can now write down a first definition of computation, which will be refined in chapter 3:

Computation is the transformation of sequences of symbols according to precise rules.

What will need refinement is the ‘precise rules’, which must be expressed so precisely that a machine can apply them unambiguously.
