
Chapter 4

Advanced Quantum Computing: Interference and Entanglement

Abstract

The special properties of quantum objects (atoms, ions, and photons) are superposition, interference, and entanglement. Superposition refers to particles existing across all possible states simultaneously. Interference refers both to the way that noise from the environment can disturb a quantum object and to the way that the wave functions of particles can either reinforce or diminish each other. Entanglement means that groups of particles are connected and can interact in ways such that the quantum state of each particle cannot be described independently of the state of the others, even when the particles are separated by a large distance. One of the most important implications of entanglement is that qubits can be error-corrected, which will likely be necessary for the advent of universal quantum computing. An application of quantum computing that is already available is certifiably random bits, a proven source of randomness, which is used in secure cryptography.

4.1 Introduction

One surprise is that there may be many more useful short-term applications of quantum computing with currently available NISQ devices than has been thought possible without full-blown universal quantum computers. NISQ devices are noisy intermediate-scale quantum devices (Preskill, 2018). For example, even near-term quantum computing devices may allow computations as elaborate as the simulation of quantum field theories (Jordan et al., 2012).

4.1.1 Quantum statistics

Quantum superposition, entanglement, and interference (SEI) properties come together in the discipline of quantum statistics. Quantum phenomena have a signature. They produce certain kinds of recognizable quantum statistical distributions that could only have come from quantum mechanical systems. This includes patterns from interference (through amplitudes), superposition (through qubit spins, such as in quantum annealing), and entanglement (through Bell pairs and otherwise). These quantum signatures are unique and identifiable. This is not surprising, given that quantum statistics means studying how wave functions behave in a quantum mechanical system, in a statistical format (i.e. distribution-based). The key point is that only a quantum system could have produced such output, and thus it can be used as a source of provable randomness.

As an indication of the unique signifiers of quantum phenomena, one relevant interpretation of quantum statistics is the Porter–Thomas distribution. This is a distribution in which the probabilities themselves are exponentially distributed random variables. Such quantum statistics are well known and have been developed elsewhere in physics to model quantum many-body systems. The practical application is that quantum statistical distributions can be set up to generate either predictable patterns or randomness. In particular, many applications require a guaranteed source of randomness. True randomness instills trust that events have been fairly determined. Some of the immediate applications for randomness are cryptography (setting the parameters for a system that cannot be back-doored or otherwise breached) and blockchains more generally, both in cryptography and in facilitating the creation of next-generation consensus algorithms (such as PBFT) based on entropy. Other uses for randomness include running lotteries (picking numbers fairly) and auditing election results (selecting precincts to review at random).
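A minimal numerical sketch of what Porter–Thomas statistics look like (assuming only numpy; the register size and random seed are arbitrary illustrative choices, not values from the book): the outcome probabilities of an idealized Haar-random state are exponentially distributed with mean 1/N, whereas a uniform classical sampler is not.

import numpy as np

# Illustrative sketch of Porter-Thomas statistics: the outcome probabilities
# of a Haar-random (idealized random-circuit) state are exponentially
# distributed with mean 1/N, i.e. Pr(p) ~ N * exp(-N * p).
rng = np.random.default_rng(0)
N = 2 ** 12  # dimension of the state space (12 qubits, chosen arbitrarily)

# A Haar-random state can be sampled by normalizing a complex Gaussian vector.
amplitudes = rng.normal(size=N) + 1j * rng.normal(size=N)
amplitudes /= np.linalg.norm(amplitudes)

# Outcome probabilities are the squared absolute values of the amplitudes.
probs = np.abs(amplitudes) ** 2

# Porter-Thomas prediction: N * p is exponentially distributed with mean 1.
scaled = N * probs
print("mean of N*p (exactly 1 by normalization):", scaled.mean())
print("fraction with N*p > 1 (exponential predicts e^-1 ~ 0.368):",
      (scaled > 1).mean())

Observing this exponential profile in sampled output is the kind of recognizable quantum signature referred to above.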

At present, most randomness is not guaranteed to be random, and a potential trend would be the widespread use of quantum computers to generate guaranteed randomness for use in various security applications.

4.2 Interference

In physics, wave interference is a phenomenon in which two waves in a system have either a reinforcing or canceling effect upon one another. There can be constructive interference (positive coherence) if the two waves are in the same phase, reinforcing each other into a stronger wave, as in the ocean when multiple big waves come into shore at once. Alternatively, there can be destructive interference (negative coherence) if the waves are in opposing phases and cancel each other out, such as when there is noise from the environment.
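A plain numerical illustration of the two cases (a sketch using only numpy; the frequency and sample grid are arbitrary illustrative values): two identical waves added in phase reinforce each other, and the same waves added half a cycle apart cancel.

import numpy as np

# Two identical waves added in phase (constructive) and out of phase
# (destructive). Frequency and time grid are arbitrary illustrative values.
t = np.linspace(0.0, 1.0, 1000)
wave = np.sin(2 * np.pi * 5 * t)

in_phase = wave + np.sin(2 * np.pi * 5 * t)               # phase shift 0
out_of_phase = wave + np.sin(2 * np.pi * 5 * t + np.pi)   # phase shift pi

print("peak of in-phase sum:", np.max(np.abs(in_phase)))          # ~2: reinforced
print("peak of out-of-phase sum:", np.max(np.abs(out_of_phase)))  # ~0: canceled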

Interference is used in building quantum circuits and calculating with vectors in quantum computing. A quantum circuit harnesses the qubit wave action with matrix multiplications (linear algebra). Each time a vector (corresponding to the qubit state) is multiplied by a matrix (the computational movement through the quantum gate system), the matrix combines the numbers in the vector, and the combination either reinforces the numbers or cancels them out. In this real physical sense, coherent wave behavior is calculated as a vector passing through quantum gates. This coherent wave behavior must compete against the incoherent environmental noise of the system. The coherent action of the waves is fragile and can be easily destroyed if the system has too much noise or other interference.
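One way to see interference arise from matrix multiplication (a minimal sketch with numpy; the two-gate circuit is an illustrative choice, not an example from the book): applying a Hadamard gate to |0⟩ spreads the amplitude over |0⟩ and |1⟩, and applying it a second time makes the two paths to |1⟩ cancel while the paths to |0⟩ reinforce.

import numpy as np

# Hadamard gate: the matrix applied at each step of this tiny circuit.
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)

state = np.array([1.0, 0.0])      # qubit starts in |0>

state = H @ state                 # amplitudes (~0.707, ~0.707): superposition
state = H @ state                 # second gate: contributions to |1> cancel

print(state)                      # ~[1, 0]: back to |0> by interference
print(np.abs(state) ** 2)         # measurement probabilities ~[1, 0]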

This is a challenge in quantum computing because irrespective of the qubit-generation method (superconducting circuits, trapped ions, topological matter, etc.), there is always going to be noise in the system, and if the noise overwhelms the coherent wave activity, the quantum computer is not going to work. Hence, quantum error correction becomes important for mitigating the noise.

4.2.1 Interference and amplitude

The wave behavior of qubits and interference is seen in modeling coherent wave action through quantum gates (protecting against noise in quantum circuit design), and also in another property of the quantum mechanical domain, amplitude. Whereas probabilities are assigned to the different possible states of the world in classical systems, amplitudes are the analog in quantum systems. Amplitudes are more complicated than probabilities, in that they can interfere destructively and cancel each other out, be complex numbers, and not sum to one. A quantum computer is a device that maintains a state that is a superposition of every configuration of its qubits, each weighted by an amplitude. For practical computation, the amplitudes are converted into probabilities (the probability of an outcome is the squared absolute value of its amplitude). A key challenge is figuring out how to obtain a quantum speed advantage by exploiting amplitudes. This is not as straightforward as using the superposition property of qubits to model a greater number of possibilities, since simply measuring random configurations will not coalesce into problem-solving answers. Hence, quantum statistical models come into play in exploiting amplitudes so that certain patterns of interference are produced.
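In symbols (standard notation, not specific to this book), an n-qubit register holds a complex amplitude for each of its 2^n configurations, and measurement converts those amplitudes into probabilities via the squared absolute value:

% State of an n-qubit register: a superposition over all configurations x,
% weighted by complex amplitudes alpha_x (standard notation).
|\psi\rangle = \sum_{x \in \{0,1\}^n} \alpha_x \, |x\rangle, \qquad \alpha_x \in \mathbb{C}

% Amplitudes can be negative or complex and need not sum to one; their
% squared absolute values are the measurement probabilities (the Born rule).
\Pr[x] = |\alpha_x|^2, \qquad \sum_{x} |\alpha_x|^2 = 1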

To produce the kinds of interference patterns that might be directed into problem answers, one strategy is to engage the properties of wave coherence. In a quantum circuit, each of the amplitudes of the possible output states is the sum of exponentially many possible contributions. These contributions are complex numbers pointing in every direction in the complex plane, and the final amplitude is whatever residue is left over after the complex contributions have mostly canceled each other out in the ending state. The idea is to incorporate this model of amplitudes into a quantum-solvable process.
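A toy numerical picture of this cancellation (a numpy sketch; the number of contributing paths is an arbitrary illustrative choice): summing many contributions with random phases leaves only a small residual amplitude, on the order of the square root of the number of terms rather than the number of terms itself.

import numpy as np

# Each output amplitude is a sum of many complex "path" contributions.
# With random phases the terms mostly cancel, leaving a small residue.
rng = np.random.default_rng(1)
num_paths = 10 ** 6                      # arbitrary illustrative count

phases = rng.uniform(0, 2 * np.pi, num_paths)
contributions = np.exp(1j * phases)      # unit-length complex numbers

residue = contributions.sum()
print("coherent maximum (all phases aligned):", num_paths)
print("actual |sum| with random phases:", abs(residue))   # ~ sqrt(num_paths) ~ 1000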

An analogy in the everyday world can be made with light. A beam from a laser pointer could be shone through a plate of ground glass to see where the light ends up on a screen behind it. This produces a speckle pattern: as the beam passes through the glass, there are darker points where there is destructive interference and lighter points where there is constructive interference. Running many samples firmly establishes the pattern of where individual photons land. The consistency of the speckle pattern can then be analyzed to see whether photons preferentially land on the lighter points of constructive interference or the darker points of destructive interference.

There are two possible ways the amplitude interference patterns can be used: to produce a reliably repeatable pattern or to produce a random pattern. The first idea is to generate a predictable pattern, which implies that this particular interference system can be used to encode a real-world problem such that a useful answer can be interpreted. Conceptually, this is more or less how quantum annealing operates, although qubit spins, not interference, are the mechanism. The same principle is at work in encoding a real-world problem into a quantum-solvable process where the quantum system runs and provides an answer. The other possibility is that the outcome is not a predictable pattern but a random one, which is helpful in another way. Gaussian output is an indication of a well-formed, statistically sound mechanism for generating randomness. Overall, the implication of quantum statistics is that quantum randomness can be produced, even in NISQ devices.

4.3 Noisy Intermediate-Scale Quantum Devices

The long-term goal of quantum computing is to realize universal quantum computation on fault-tolerant quantum information processors. In the shorter term, the objective is to solve problems with NISQ devices, which are quantum processors that are not error-corrected. Quantum computing is developing in different steps based on available technical functionality. The first phase of quantum computing (2001–2012) consisted of several demonstrations of 1–2 qubits and up to 10-qubit systems using a variety of hardware approaches. The second phase of quantum computing is currently underway (2012–2019) and includes general-purpose 30–70 qubit systems with gate model superconducting circuits, and special-purpose 2048-qubit systems with quantum annealing machine superconducting circuits. Advances in superconducting circuit technology around 2012 helped to propel the development of general-purpose gate model logic in superconducting circuits, in which operations can be controlled. Superconducting circuits, whether based on standard gate models or quantum annealing machines, are the only hardware approaches that are commercially available as of June 2019.

Existing quantum computers are NISQ devices, meaning imperfect, modestly sized quantum computing machines (Preskill, 2018). NISQ devices are an important advance over the few-qubit systems that largely served as a proof of concept. The challenge with NISQ devices is finding relevant problems that can be solved with only 50–100 qubits without error correction. In the longer term, quantum computers are foreseen to have the most significant advantage over classical computers in solving computational problems that are known to be difficult. These include Shor’s factoring algorithm (which could likely break the current RSA cryptography standard) and Grover’s search algorithm (for faster search through large datasets). Nevertheless, in the shorter term, even with NISQ devices, it may be possible to make significant progress in the areas of optimization, simulation, machine learning, and cryptography. One application with significant economic promise for quantum computing is simulating physical processes in chemistry, physics, and biology to discover new materials and pharmaceutical substances.

Although error correction is not a substantial issue for NISQ devices, it could start to become a problem in more sophisticated quantum computing systems given the sensitivity of qubits to environmental noise. Predictions as to the number of qubits for which error correction will be required vary considerably. One estimate is presented in Table 4.1 (McMahon, 2018), as a strawman sketch of the different kinds of applications and the number of qubits needed. Many factors could play a role in error correction, including the hardware approach, chip architecture, and quantum algorithm design. Due to the short-term unavailability of quantum error-correction schemes, clever workarounds have been developed. These include using NISQ devices for quantum simulation using error-resilient algorithms (Colless et al., 2018), hardware that is resistant to errors (Kandala et al., 2017), and other error mitigation techniques (Kandala et al., 2019).

One step in the quantum computing roadmap is demonstrating quantum advantage (a clear advantage of using quantum computing versus classical computing). Substantiating quantum advantage is a largely academic hurdle that is expected to be achievable with the NISQ devices that are currently available with 30–70 qubits. After quantum advantage, the next class of quantum computing applications is optimization and simulation in domains ranging from chemistry, physics, and biology to economics and finance. Further applications involve quantum machine learning and quantum cryptography. Quantum machine learning refers to both the application of machine learning techniques to the quantum domain and using quantum mechanical systems to extend research in machine learning. Quantum cryptography contemplates an extensive suite of applications such as quantum key distribution, cryptographic algorithms, and quantum-secure zero-knowledge proofs.

Table 4.1. Quantum applications and number of qubits required.

Application                                  Estimated # qubits required
Quantum advantage                            50
Quantum optimization and sampling            ~100
Quantum simulation                           >100
Chemistry simulation/nitrogen fixation       Few hundred
Applications requiring error correction      >1 million
Shor’s algorithm (factoring)                 >50 million
Grover’s algorithm (search)                  >100 million

4.3.1 Computability and computational complexity

The advent of quantum computing brings into sharper relief the theories used for analyzing the kinds of problems that are computable. Computability relates to the theory of computation and investigates how efficiently problems can be solved based on different kinds of algorithms and models of computation. Quantum computing might extend the range of the kinds of problems that can be computed efficiently, but would not allow the computability of all problems. Quantum computing could provide an incremental yet crucial extension to the kinds of real-world problems that might be solvable.

In computational complexity, two parameters are studied: the computational resources, in terms of time and space, that are necessary to solve a problem. The theoretical basis for computational complexity is the Church–Turing computability thesis (Table 4.2). The original Church–Turing thesis is concerned only with the theoretical status of whether a given problem is computable at all, irrespective of the time required. The extended Church–Turing thesis also incorporates the practical consideration of whether a problem is efficiently computable in time. A given problem might fall within any of various tiers of time complexity and space complexity in the hierarchy of computational complexity classes.

As an extremely broad heuristic, quantum computers may allow a one-tier increase in computing according to the computational complexity schema. For example, a problem that requires exponential time in classical systems (i.e. time that is too long for practical results) may take polynomial time in quantum systems (i.e. a reasonable amount of time for practical use). In the canonical Traveling Salesman Problem, it may be possible to check twice as many cities in half the time using a quantum computer.
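As one concrete, well-established illustration of such a tier change (standard complexity statements for factoring, not estimates from this book): factoring an n-bit integer takes super-polynomial time with the best known classical algorithm but polynomial time with Shor’s quantum algorithm.

% Time to factor an n-bit integer (standard statements).
% Best known classical algorithm (general number field sieve), heuristically:
T_{\text{classical}}(n) \approx \exp\!\left(c \, n^{1/3} (\log n)^{2/3}\right)

% Shor's quantum algorithm, polynomial in the bit-length n:
T_{\text{quantum}}(n) = O(n^3)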

Table 4.2. Church–Turing computability thesis.

Church–Turing thesis          Problem answered
Church–Turing (original)      Is the problem computable? (ignoring time)
Church–Turing (extended)      Is the problem efficiently computable in time?

4.4 Quantum Error Correction

4.4.1 Practical concerns and status

One of the biggest challenges to the potential instantiation of universal quantum computers is quantum error correction. Qubits are more fragile than classical bits and need to be error-corrected if they become damaged. It is too early in the development of quantum computing to estimate exactly which hardware approaches will require error correction and at what point (with certain numbers of qubits). However, since qubits are affected by both state decay and environmental noise, some kind of error correction is likely to be required. Methods for error correction are largely a research effort at present, and more clarity may arrive with implementation.

The two kinds of commercially available systems (as of June 2019) use superconducting circuits, but in different architectures. There is the standard gate model (with 30–70 qubits) and the quantum annealing model (with 2048 qubits). The trade-off is that the gate model aims to be more of a universal quantum computer, whereas the quantum annealing model can only accommodate certain classes of optimization problems. On the one hand, annealing machines harness the natural evolution of quantum systems over time to settle into the lowest-energy state (a minimum). The intermediate operation of the quantum process cannot be controlled. On the other hand, gate model machines seek to control and manipulate the evolution of the quantum system at each moment in time, which suggests that they can be used to solve a much larger and more general set of problems. For error correction, the benefit is that with less manipulation, fewer errors arise, and the annealing machine does not require the same kinds of error correction that the gate model machine does, at least at the current number of qubits.

Table 4.3. Quantum computing systems and error correction.


Other approaches to quantum information systems claim that error correction is not an immediate concern (Table 4.3), although the path to scaling up the number of qubits is perhaps less clear than with superconducting circuits. These other methods, although still in the research phase, are designed such that error-correction requirements may be greatly reduced. These systems include ion trapping, fermion braiding, and photonic quantum computing. In particular, the fermion braiding approach is robust to noise since only changes in topology, not changes in geometry, have an effect, and errors can be corrected in hardware instead of software.

4.4.2 Quantum state decoherence

Quantum information has different properties than classical information and is much more sensitive to being damaged by the computing environment. Whereas in a classical system the idea is to pack in as many bits as possible, in a quantum system the aim is to have only as many high-fidelity qubits as can be effectively controlled, and to scale up with that level of integrity. It is difficult to isolate systems from the environment well enough for them to have useful quantum behavior. The quantum states can decohere (decay) quickly and become damaged by the noisy (imperfect) environment. It is likely that some kind of error correction will always be necessary in quantum systems. Even if a perfect computing environment without any noise were to be obtained, there is still the natural tendency of qubits to decay that must be addressed. The excited state of a qubit ultimately decays to the ground state due to coupling to vacuum fluctuations (the vacuum fluctuations of electromagnetic fields are unavoidable). Therefore, it is necessary to encode the quantum states to be protected in such a way that they can be error-corrected to remain robust. These requirements introduce the notions of qubit lifecycle and qubit management in quantum information systems.
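The decay of the excited state is conventionally quantified by the energy-relaxation time (a textbook relaxation model; the symbol T_1 is the standard name for this time, not a figure from this book):

% Probability that a qubit prepared in the excited state |1> is still
% excited after time t, where T_1 is the characteristic relaxation time.
P_{|1\rangle}(t) = e^{-t/T_1}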

4.4.3 Entanglement property of qubits

Unlike classical information, which can be examined arbitrarily many times to determine whether it has changed, quantum information cannot be measured directly because measurement will change or destroy the information. However, quantum information has the interesting property of entanglement. The entanglement property refers to quantum particles being entangled with one another. Quantum particles are not isolated and discrete, but rather correlated, both with each other and with the history of the system. Particles and previous states are correlated as part of the fuller information landscape of the quantum system. Hence, quantum error correction is performed by taking advantage of the entanglement property of qubits. The insight is that due to entanglement, it is possible to measure relationships between qubits without measuring the values stored by the qubits themselves.

4.4.3.1 Error-correction codes

Quantum error correction uses the entanglement property to smear out the information of 1 qubit onto 9 entangled qubits (in the basic case). The idea is that the information of 1 qubit can be stored on a highly entangled state of several qubits. A local point, 1 qubit, can be smeared out, and represented with a larger number of qubits. The auxiliary qubits are called an ancilla (ancillary qubits). Quantum error correction is typically performed by introducing an ancilla in the form of additional qubits with which the qubit of interest is entangled. Entangling the qubit of interest with ancillary qubits allows the qubit to be protected by smearing out its information over a larger system. The qubit of interest can be error-checked since its information can be examined indirectly through the entangled qubits (through parity measurements). In this way, quantum error correction is used to test the integrity of quantum information without damaging it. Quantum error correction makes quantum computing feasible, in that a quantum computer can tolerate some degree of noise in the environment by correcting errors. Many different kinds of quantum error-correction codes (encoding schemes) have been proposed. Shor’s code uses a 9-qubit smear, and others require fewer qubits (a 7-qubit code and a 5-qubit code, for example) (Shor, 1995).

4.4.3.2 Classical error correction

In general, all forms of computing systems use error correction to check for data integrity. An error-correcting process seeks to determine whether information has been damaged or destroyed and to restore its initial state. Error correction is well understood in classical logic. For example, a memory chip storing bits could be hit by a cosmic ray. If one bit in a 32-bit word is flipped, there are many known ways of recovery. One frequently used method is keeping many copies of the data. With redundancy, having several copies of the information means that a simple majority-voting mechanism can be used to confirm the intact version of the data. With quantum logic, however, error correction based on making redundant copies of the same information cannot work. It is not possible to copy quantum information due to the no-cloning theorem, which states that it is impossible to create an identical copy of an arbitrary unknown quantum state. Therefore, quantum error-correction methods such as those based on entanglement are needed.
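A minimal sketch of this classical redundancy idea (plain Python; the flip probability and trial count are arbitrary illustrative values): each bit is stored as three copies and read back by majority vote.

import random

# Classical repetition code: store each bit three times and decode by
# majority vote. The flip probability is an arbitrary illustrative value.
def store(bit):
    return [bit, bit, bit]

def noisy_read(copies, flip_prob=0.05):
    return [b ^ 1 if random.random() < flip_prob else b for b in copies]

def majority(copies):
    return 1 if sum(copies) >= 2 else 0

random.seed(0)
errors = 0
for _ in range(100_000):
    received = noisy_read(store(1))
    errors += (majority(received) != 1)

# A single copy fails 5% of the time; majority voting over three copies
# fails only when at least two copies flip (roughly 0.7% here).
print("decoded error rate:", errors / 100_000)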

4.4.3.3 Shor’s code

It is not by accident that Shor’s code, the first quantum error-correction code discovered, uses 9 qubits. Nine qubits is the smallest ancilla that can be used to confirm that the original qubit was not flipped (changed or damaged), by checking various pairwise sequences of possible flipping along the X, Y, and Z axes of the qubit. A simple error-correcting code could instantiate a single logical qubit of data as three physical qubits for each scenario of the three axes. With pairwise evaluation, it is possible to determine whether the first and second qubits have the same value, and whether the second and third qubits have the same value, without determining what that value is. If one of the qubits turns out to disagree with the other two, it can be reset to their value. Further, the pairwise evaluations might be performed in both time and space, suggesting quantum information processing architectures with time speed-ups.
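The three-qubit bit-flip portion of this idea can be sketched with a small state-vector simulation (numpy only; the amplitudes and the location of the error are arbitrary illustrative choices, and this is a simplified emulation of the parity checks rather than a full quantum circuit): the two pairwise checks locate a flipped qubit without revealing the encoded amplitudes.

import numpy as np

# Three-qubit bit-flip code simulated on a state vector of 8 amplitudes.
# Basis ordering: index = 4*q0 + 2*q1 + q2, so qubit 0 is the leftmost bit.

def apply_x(state, qubit):
    """Flip one physical qubit (used both for the error and the correction)."""
    flipped = np.zeros_like(state)
    for index, amp in enumerate(state):
        flipped[index ^ (1 << (2 - qubit))] = amp
    return flipped

def qubit_bits(index):
    """Return the three classical bits (q0, q1, q2) of a basis index."""
    return (index >> 2) & 1, (index >> 1) & 1, index & 1

# Encode a logical qubit alpha|0> + beta|1> as alpha|000> + beta|111>.
alpha, beta = 0.6, 0.8                     # arbitrary illustrative amplitudes
encoded = np.zeros(8, dtype=complex)
encoded[0b000], encoded[0b111] = alpha, beta

# A bit-flip error strikes qubit 1 (arbitrary choice).
corrupted = apply_x(encoded, 1)

# Parity checks: do qubits 0,1 agree? do qubits 1,2 agree? On a code state
# with at most one flip, every basis state carrying amplitude gives the same
# answers, so the checks reveal nothing about alpha and beta themselves.
support = [i for i, amp in enumerate(corrupted) if abs(amp) > 0]
q0, q1, q2 = qubit_bits(support[0])
syndrome = (q0 ^ q1, q1 ^ q2)

# Decode the syndrome to locate the flipped qubit and undo it.
syndrome_to_qubit = {(1, 0): 0, (1, 1): 1, (0, 1): 2}
if syndrome in syndrome_to_qubit:
    corrupted = apply_x(corrupted, syndrome_to_qubit[syndrome])

print(np.allclose(corrupted, encoded))     # True: the logical state is restored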

Shor’s code is a sequential method using single Pauli operators (operators built from the X, Y, and Z Pauli matrices) to act on the system according to the different possible error permutations that could have occurred. Since the ancillary qubits and the original qubit are entangled, any error will have a recognizable signature and can be corrected (by repairing it into the initial phase or into an irrelevant phase that does not impact the original qubit’s information). Since the error-free states of the system are eigenstates with eigenvalue one of all of these operators, the measurement does nothing to the overall state. The Shor code is redundant, in that the number of bits of information it protects is significantly fewer than the number of physical bits that are present. Noise can come in from the environment without disrupting the message. Also, the Shor code is nonlocal in the sense that the qubit information is carried in the entanglement between the multiple qubits, which protects it against local decoherence and depolarization.
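In the standard stabilizer presentation of the nine-qubit code (textbook material, with subscripts indexing the nine physical qubits), the measured operators are products of Pauli Z's within each block of three qubits (catching bit flips) and products of Pauli X's across the blocks (catching phase flips); the encoded state is the simultaneous eigenvalue-one eigenstate of all of them:

% Stabilizer generators of Shor's nine-qubit code (standard presentation).
% Z-type checks compare neighbouring qubits within each block of three:
Z_1 Z_2,\; Z_2 Z_3,\qquad Z_4 Z_5,\; Z_5 Z_6,\qquad Z_7 Z_8,\; Z_8 Z_9

% X-type checks compare the blocks of three against each other:
X_1 X_2 X_3 X_4 X_5 X_6,\qquad X_4 X_5 X_6 X_7 X_8 X_9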

The Pauli operator approach, which uses the X–Y–Z quantum spin representations, is one proposed method. Kraus operators, which are operator-sum representations, are another (Verlinde & Verlinde, 2013). However, Kraus operators can be difficult to apply because they require details of the interaction with the environment, which may be unknown. Single Pauli operators suggest a more straightforward implementation model.
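For reference, the single-qubit Pauli operators in question are the standard 2×2 matrices:

% The three Pauli matrices (standard definitions).
X = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad
Y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \qquad
Z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}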

4.4.4 Quantum information processors

The main concept of error correction is that to protect qubits from environmental noise and to mitigate against state decay, the qubit of interest can be encoded in a larger number of ancillary qubits through entanglement. The entangled qubits are combined into a bigger overall fabric of qubits that constitutes the quantum information processor. For example, a quantum information processor might have 50 qubits, and an error-correction requirement of 9 qubits for each qubit. This would only leave 5 qubits available for information processing. The scaling challenge is clear, in that with current error-correction methods, most of the processing capacity must be devoted to error correction. The implied scaling rule is 10, in the sense that any size quantum information processor only has available one-tenth of the total qubits for the actual information processing (each 1 qubit requires 9 qubits of error correction). More efficient error-correcting codes have been proposed, but the scaling rule of 10 could be a general heuristic, at least initially.
