
1.3. Quantum randomness


Quantum randomness is hailed to be more than “epistemic”, that is, “intrinsic” (to the theory). However, quantum randomness is not part of the standard mathematical model of the quantum, which talks about probabilities; it concerns the measurement of individual observables. So, to give more sense to the first statement, we need to answer (at least) the following questions: (1) What is the source of quantum randomness? (2) What is the quality of quantum randomness? (3) Is quantum randomness different from classical randomness?

A naive answer to (1) is to say that quantum mechanics has shown “without doubt” that microscopic phenomena are intrinsically random. For example, we cannot predict with certainty how long it will take for a single unstable atom in a controlled environment to decay, even if one has complete knowledge of the “laws of physics” and the atom’s initial conditions. One can only calculate the probability of decay in a given time, nothing more! This is intrinsic randomness guaranteed.
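
As a concrete and purely illustrative rendering of this point, the sketch below computes the only quantity the theory does provide, the probability of decay within a given time, assuming the standard exponential decay law; the half-life value is arbitrary and not tied to any particular isotope.

# Minimal sketch: the probability that a single unstable atom decays within
# time t, assuming the standard exponential decay law P(t) = 1 - exp(-lambda*t).
# The half-life used below is illustrative, not tied to any particular isotope.
import math

def decay_probability(t, half_life):
    """Probability that one atom decays within time t (same units as half_life)."""
    decay_constant = math.log(2) / half_life
    return 1.0 - math.exp(-decay_constant * t)

# The chance of decay within one half-life is exactly 0.5, yet the moment of
# decay of this particular atom remains unpredictable.
print(decay_probability(1.0, 1.0))   # 0.5
print(decay_probability(3.0, 1.0))   # 0.875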

But is it? What is the cause of the above quantum mechanical effect? One way to answer is to consider a more fundamental quantum phenomenon: quantum indeterminism. What is quantum indeterminism and where does it come from? Quantum indeterminism appears in the measurement of individual observables: it has been at the heart of quantum mechanics since Born postulated that the modulus-squared of the wave function should be interpreted as a probability density that, unlike in classical statistical physics (Myrvold 2011), expresses fundamental, irreducible indeterminism (Born 1926). For example, the measurement of the spin, “up or down”, of an electron, in the standard interpretation of the theory, is considered to be pure contingency, a symmetry breaking with no antecedent, in contrast to the causal understanding of Curie’s principle5. The nature of individual measurement outcomes in quantum mechanics was, for a period, a subject of much debate. Einstein famously dissented, stating his belief that “He does not throw dice” (Born 1969, p. 204). Over time the assumption that measurement outcomes are fundamentally indeterministic became a postulate of the quantum orthodoxy (Zeilinger 2005). Of course, this view is not unanimously accepted (see Laloë 2012).

Following Einstein’s approach (Einstein et al. 1935), quantum indeterminism corresponds to the absence of physical reality, if reality is what is made accessible by measurement: if no unique element of physical reality corresponding to a particular physical observable (thus, measurable) quantity exists, this is reflected by the physical quantity being indeterminate. This approach needs to be more precisely formalized. The notion of value indefiniteness, as it appears in the theorems of Bell (Bell 1966) and, particularly, Kochen and Specker (1967), has been used as a formal model of quantum indeterminism (Abbott et al. 2012). The model also has empirical support as these theorems have been experimentally tested via the violation of various inequalities (Weihs et al. 1998). We have to be aware that, going along this path, the “belief” in quantum indeterminism rests on the assumptions used by these theorems.

An observable is value definite for a given quantum system in a particular state if the measurement of that observable is pre-determined to take a (potentially hidden) value. If no such pre-determined value exists, the observable is value indefinite. Formally, this notion can be represented by a (partial) value assignment function (see Abbott et al. (2012) for the complete formalism).
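
As an informal illustration only (the encoding and names below are ours, not the formalism of Abbott et al. (2012)), a partial value assignment can be pictured as a mapping that gives some observables a definite 0/1 value and leaves the others undefined:

# Illustration only: a (partial) value assignment represented as a mapping from
# observable labels to 0, 1, or None ("value indefinite"). The labels and helper
# are hypothetical; see Abbott et al. (2012) for the actual formalism.
from typing import Dict, Optional

ValueAssignment = Dict[str, Optional[int]]   # None = value indefinite

def is_value_definite(assignment: ValueAssignment, observable: str) -> bool:
    """An observable is value definite iff the assignment gives it 0 or 1."""
    return assignment.get(observable) is not None

# A system prepared in state |psi>: the projection onto |psi> is value definite
# (predetermined to yield 1), while a projection P_phi onto a state neither
# parallel nor orthogonal to |psi> is left indefinite.
v: ValueAssignment = {"P_psi": 1, "P_phi": None}
print(is_value_definite(v, "P_psi"))   # True
print(is_value_definite(v, "P_phi"))   # False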

When should we conclude that a physical quantity is value definite? Einstein, Podolsky and Rosen (EPR) defined physical reality in terms of certainty of predictability in Einstein et al. (1935, p. 777):

If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of reality corresponding to that quantity.

Note that both allusions to “disturbance” and to the (numerical) value of a physical quantity refer to measurement as the only form of access to reality we have. Thus, based on this accepted notion of an element of physical reality, following (Abbott et al. 2012) we answer the above question by identifying the EPR notion of an “element of physical reality” with “value definiteness”:

EPR principle: If, without disturbing a system in any way, we can predict with certainty the value of a physical quantity, then there exists a definite value prior to the observation corresponding to this physical quantity.

The EPR principle justifies:

Eigenstate principle: a projection observable corresponding to the preparation basis of a quantum state is value definite.

The requirement called admissibility is used to rule out outcomes that are impossible to obtain according to quantum predictions, predictions which have overwhelming experimental confirmation:

Admissibility principle: definite values must not contradict the statistical quantum predictions for compatible observables of a single quantum.

Non-contextuality principle: the outcome obtained by measuring a value definite observable does not depend on which other compatible observables (i.e. observables that can be measured simultaneously, in parallel with it) are measured alongside it.

The Kochen-Specker Theorem (Kochen and Specker 1967) states that no value assignment function can consistently make all observable values definite while maintaining the requirement that the values are assigned non-contextually. This is a global property: non-contextuality is incompatible with all observables being value definite. However, it is possible to localize value indefiniteness by proving that even the existence of two incompatible value definite observables contradicts admissibility and non-contextuality, without requiring that all observables be value definite. As a consequence, we obtain the following “formal identification” of a value indefinite observable:

Any mismatch between preparation and measurement context leads to the measurement of a value indefinite observable.
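
To make concrete the kind of assignment the Kochen-Specker Theorem rules out, the sketch below searches by brute force for a non-contextual, admissible {0,1} assignment over a given family of measurement contexts. The toy contexts are invented for illustration; it is only for genuine Kochen-Specker sets of rays in dimension at least 3 that the search fails for every candidate assignment.

# Brute-force search for a non-contextual, admissible {0,1} assignment:
# non-contextuality = each observable gets a single value shared by all contexts;
# admissibility = in each context (a complete set of compatible projections)
# exactly one projection is assigned 1. The toy contexts below are hypothetical;
# for a genuine Kochen-Specker set of rays in dimension >= 3 the search fails
# for every candidate assignment.
from itertools import product

def find_noncontextual_assignment(contexts):
    observables = sorted({o for ctx in contexts for o in ctx})
    for values in product((0, 1), repeat=len(observables)):
        v = dict(zip(observables, values))
        if all(sum(v[o] for o in ctx) == 1 for ctx in contexts):
            return v      # an admissible non-contextual assignment exists
    return None           # no such assignment: a Kochen-Specker-type obstruction

toy_contexts = [("A", "B", "C"), ("C", "D", "E"), ("E", "F", "A")]
print(find_noncontextual_assignment(toy_contexts))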

This fact is stated formally in the following two theorems. As usual, we denote the set of complex numbers by ℂ and vectors in the Hilbert space ℂⁿ by |·⟩; the projection onto the linear subspace spanned by a non-zero vector |φ⟩ is denoted by Pφ. For more details see Laloë (2012).

THEOREM 1.1.– Consider a quantum system prepared in the state |ψ⟩ in a Hilbert space ℂⁿ of dimension n ≥ 3, and let |φ⟩ be any state neither parallel nor orthogonal to |ψ⟩. Then the projection observable Pφ is value indefinite under any non-contextual, admissible value assignment.

Hence, accepting that definite values exist for certain observables (the eigenstate principle) and behave non-contextually (non-contextuality principle) is enough to locate and derive, rather than postulate, quantum value indefiniteness. In fact, value indefinite observables are far from being scarce (Abbott et al. 2014b).
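
For a concrete (and arbitrarily chosen) example of the hypothesis of Theorem 1.1, the following sketch checks numerically that a pair of qutrit states is neither parallel nor orthogonal, so that measuring Pφ on a system prepared in |ψ⟩ targets a value indefinite observable:

# Numerical check of the hypothesis of Theorem 1.1 for an arbitrary qutrit pair:
# |phi> must be neither parallel nor orthogonal to the preparation state |psi>.
import numpy as np

psi = np.array([1, 0, 0], dtype=complex)                # preparation state |psi>
phi = np.array([1, 1, 0], dtype=complex) / np.sqrt(2)   # state defining P_phi

overlap = abs(np.vdot(psi, phi)) ** 2     # |<psi|phi>|^2, the Born probability
neither_parallel_nor_orthogonal = 0.0 < overlap < 1.0

print(overlap)                            # 0.5
print(neither_parallel_nor_orthogonal)    # True: Theorem 1.1 applies, so P_phi is
                                          # value indefinite under any
                                          # non-contextual, admissible assignment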

THEOREM 1.2.– Assume the eigenstate, non-contextuality and admissibility principles. Then the (Lebesgue) probability that an arbitrarily chosen observable is value indefinite is 1.

Theorem 1.2 says that all value definite observables are confined to a set of probability zero. Consequently, value definite observables are not the norm; they are the exception, confirming a long-held intuition in quantum mechanics.

The above analysis not only offers an answer to question (1) from the beginning of this section, but also indicates a procedure to generate a form of quantum random bits (Calude and Svozil 2008; Abbott et al. 2012, 2014a): to locate and measure a value indefinite observable. Quantum random number generators based on Theorem 1.1 were proposed in (Abbott et al. 2012, 2014a). Of course, other possible sources of quantum randomness may be identified, so we are naturally led to question (2): what is the quality of quantum randomness certified by Theorem 1.1, and, if other forms of quantum randomness exist, what qualities do they have?
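
The bit-generation step behind such a generator can be sketched as follows. This is a classical simulation only, not the physical protocol of the cited proposals: numpy’s pseudo-random generator stands in for the quantum measurement, and the state and projection chosen are illustrative.

# Classical simulation of the bit-generation step: prepare |psi>, measure a
# projection P_phi with |phi> neither parallel nor orthogonal to |psi>, record
# the 0/1 outcome. numpy's PRNG stands in for the physical measurement, so this
# is an illustration, not a certified quantum random number generator.
import numpy as np

def measure_projection(psi, phi, rng):
    """Simulate one measurement of P_phi on |psi> via the Born rule."""
    p_one = abs(np.vdot(phi, psi)) ** 2    # probability of outcome 1
    return int(rng.random() < p_one)

rng = np.random.default_rng()
psi = np.array([1, 0, 0], dtype=complex)
phi = np.array([1, 1, 1], dtype=complex) / np.sqrt(3)   # neither parallel nor orthogonal

bits = [measure_projection(psi, phi, rng) for _ in range(16)]
print(bits)

With this illustrative choice of |φ⟩ the raw outcomes are biased (the Born probability is 1/3), so in practice such raw bits are post-processed to remove the bias.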

To this aim we are going to look, in more detail, at the unpredictability of quantum randomness certified by Theorem 1.1. We will start by describing a non-probabilistic model of prediction – proposed in (Abbott et al. 2015b) – for a hypothetical experiment E specified effectively by an experimenter6.

The model uses the following key elements:

1) The specification of an experiment E for which the outcome must be predicted.

2) A predicting agent or “predictor”, which must predict the outcome of the experiment.

3) An extractor ξ is a physical device that the predictor uses to (uniformly) extract information pertinent to prediction that may be outside the scope of the experimental specification E. This could be, for example, the time, measurement of some parameter, iteration of the experiment, etc.

4) The uniform, algorithmic repetition of the experiment E.

In this model, a predictor is an effective (i.e. computational) method to uniformly predict the outcome of an experiment using finite information extracted (again, uniformly) from the experimental conditions, along with the specification of the experiment, but independently of the results of the experiments. A predictor depends on an axiomatic, formalized theory, which allows the prediction to be made, i.e. to compute the “future”. An experiment is predictable if any potential sequence of repetitions (of unbounded, but finite, length) can always be predicted correctly by such a predictor. To avoid prediction being successful just by chance, we require that the correct predictor – which can return a prediction or abstain (prediction withheld) – never makes a wrong prediction, no matter how many times it is required to make a new prediction (by repeating the experiment), and cannot abstain from making predictions indefinitely, i.e. the number of correct predictions can be made arbitrarily large by repeating the experiment enough times.
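
As a reading aid, these ingredients can be phrased as a handful of Python types; the names below are ours, and the encoding of “prediction withheld” as None is a convention, not part of the model of Abbott et al. (2015b).

# A reading aid only: the ingredients of the prediction model as Python types.
# The names (Extractor, Predictor, WITHHELD) and the encoding are ours; the
# actual model is the formal one of Abbott et al. (2015b).
from typing import Callable, Optional

Parameter = object                       # lambda: the full "state of the universe"
Extractor = Callable[[Parameter], str]   # xi: finite information extracted from lambda
Prediction = Optional[int]               # 0, 1, or None ("prediction withheld")
WITHHELD: Prediction = None
Predictor = Callable[[str], Prediction]  # P_E: computable and halting on every input

def timestamp_extractor(lam: Parameter) -> str:
    """Example extractor: encode only the time of day at which E is run."""
    return str(lam)[:16]                 # a finite string

def cautious_predictor(extracted: str) -> Prediction:
    """Example predictor: always refrains. It never errs, yet it is not
    'correct for xi', because it abstains from predicting indefinitely."""
    return WITHHELD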

We consider a finitely specified physical experiment E producing a single bit x ∈ {0,1}. Such an experiment could, for example, be the measurement of a photon’s polarization after it has passed through a 50:50 polarizing beam splitter, or simply the toss of a physical coin with initial conditions and experimental parameters specified finitely.

A particular trial of E is associated with the parameter λ, which fully describes the “state of the universe” in which the trial is run. This parameter is “an infinite quantity” – for example, an infinite sequence or a real number – structured in a way dependent on the intended theory. The result below, though, is independent of the theory. While λ is not in its entirety an obtainable quantity, it contains any information that may be pertinent to prediction. Any predictor can have practical access to a finite amount of this information. We can view an extractor (defined next) as the resource that makes such finite information available in order to predict the outcome of the experiment E.

An extractor is a physical device selecting a finite amount of information included in λ without altering the experiment E. It can be used by a predicting agent to examine the experiment and make predictions when the experiment is performed with parameter λ. So, the extractor produces a finite string of bits ξ (λ). For example, ξ (λ) may be an encoding of the result of the previous instantiation of E, or the time of day the experiment is performed.

A predictor for E is an algorithm (computable function) PE which halts on every input and outputs either 0, 1 (cases in which PE has made a prediction), or “prediction withheld”. We interpret the last form of output as a refrain from making a prediction. The predictor PE can utilize, as input, the information ξ (λ) selected by an extractor encoding relevant information for a particular instantiation of E, but must not disturb or interact with E in any way; that is, it must be passive.

A predictor PE provides a correct prediction using the extractor ξ for an instantiation of E with parameter λ if, when taking as input ξ (λ), it outputs 0 or 1 (i.e. it does not refrain from making a prediction) and this output is equal to x, the result of the experiment.

Let us fix an extractor ξ. The predictor PE is k-correct for ξ if there exists an n ≥ k such that when E is repeated n times with associated parameters λ1, …, λn, producing the outputs x1, x2, …, xn, PE outputs the sequence PE (ξ (λ1)), PE (ξ (λ2)), …, PE (ξ (λn)) with the following two properties:

1) no prediction in the sequence is incorrect, and

2) in the sequence, there are k correct predictions.

The repetition of E must follow an algorithmic procedure for resetting and repeating the experiment; generally, this will consist of a succession of events, with the procedure being “prepared, performed, the result (if any) recorded and E being reset”.

The definition above captures the need to avoid correct predictions by chance by forcing more and more trials and predictions. If PE is k-correct for ξ, then the probability that such a correct sequence would be produced by chance is bounded by 2⁻ᵏ; hence, it tends to zero when k goes to infinity.
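
The two conditions of k-correctness and the chance bound can be checked mechanically; the sketch below is an illustration of the definition (again encoding “prediction withheld” as None), not part of the formal model.

# Illustration of the definition: check whether a finite run of predictions is
# k-correct (no incorrect prediction and at least k correct ones) and compute
# the 2**(-k) bound on achieving this by blind guessing. None encodes
# "prediction withheld".

def is_k_correct(predictions, outcomes, k):
    """predictions: list of 0/1/None; outcomes: list of 0/1, same length."""
    made = [(p, x) for p, x in zip(predictions, outcomes) if p is not None]
    no_incorrect = all(p == x for p, x in made)
    return no_incorrect and len(made) >= k

def chance_bound(k):
    """Probability of k correct binary predictions by pure chance."""
    return 2.0 ** (-k)

preds    = [1, None, 0, None, 1, 1]
outcomes = [1, 0,    0, 1,    1, 1]
print(is_k_correct(preds, outcomes, 4))   # True: four predictions, all correct
print(chance_bound(4))                    # 0.0625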

The confidence we have in a k-correct predictor increases as k approaches infinity. If PE is k-correct for ξ for all k, then PE never makes an incorrect prediction and the number of correct predictions can be made arbitrarily large by repeating E enough times. In this case, we simply say that PE is correct for ξ. The infinity used in the above definition is potential, not actual: its role is to guarantee arbitrarily many correct predictions.

This definition of correctness allows PE to refrain from predicting when it is unable to. A predictor PE which is correct for ξ is, when using the extracted information ξ (λ), guaranteed to always be capable of providing more correct predictions for E, so it will not output “prediction withheld” indefinitely. Furthermore, although PE is technically only used a finite, but arbitrarily large, number of times, the definition guarantees that, in the hypothetical scenario where it is executed infinitely many times, PE will provide infinitely many correct predictions and not a single incorrect one.

Finally, we define the prediction of a single bit produced by an individual trial of the experiment E. The outcome x of a single trial of the experiment E performed with parameter λ is predictable (with certainty) if there exist an extractor ξ and a predictor PE which is correct for ξ and PE (ξ (λ)) = x.

By applying the model of unpredictability described above to quantum measurement outcomes obtained by measuring a value indefinite observable, for example, obtained using Theorem 1.1, we obtain a formal certification of the unpredictability of those outcomes:

THEOREM 1.3. (Abbott et al. 2015b) – Assume the EPR and Eigenstate principles. If E is an experiment measuring a quantum value indefinite observable, then, for every predictor PE using any extractor ξ, PE is not correct for ξ.

THEOREM 1.4. (Abbott et al. 2015b) – Assume the EPR and Eigenstate principles. In an infinite independent repetition of an experiment E measuring a quantum value indefinite observable which generates an infinite sequence of outcomes x = x1x2…, no single bit xi can be predicted with certainty.

According to Theorems 1.3 and 1.4, the outcome of measuring a value indefinite observable is “maximally unpredictable”. We can measure the degree of unpredictability using the computational power of the predictor.

In particular, we can consider predictors weaker or stronger than those used in Theorems 1.3 and 1.4, which have the power of a Turing machine (Abbott et al. 2015a). This “relativistic” understanding of unpredictability (fix the reference system and the invariant-preserving transformations, as Einstein’s relativity theory proposes) allows us to obtain “maximal unpredictability”, though not absolutely: only relative to a theory, no more and no less. From this perspective, Theorem 1.3 should not be interpreted as a statement that quantum measurement outcomes are “true random”7 in any absolute sense: true randomness – in the sense that no correlations exist between successive measurement results – is mathematically impossible, as we will show in section 1.5 in a “theory invariant way”, that is, for sequences of pure digits, independent of the measurements (classical, quantum, etc.) from which they may have been derived, if any. Finally, question (3) will be discussed in section 1.6.2.
