
Chapter 1

Introduction

Quantum computing … has established an unprecedentedly deep link between the foundations of computer science and the foundations of physics

— John Preskill (2000, p. 127)

The implication is that reality can be computed. The startling premise of quantum computing is that the bizarre quantum mechanical realm that was thought incomprehensible can be computed. Quantum mechanics might remain incomprehensible, but at least it can be computed. The bigger notion is that the link between quantum computing and quantum physics suggests the possibility of delivering on the promise of understanding the true nature of physical reality.

Simulating the quantum mechanical world with quantum computers built in its image, as Feynman envisioned, is merely the first step. Beyond that milestone, the real endpoint is mobilizing the quantum mechanical domain to compute more difficult problems and create new possibilities.

Engaging the quantum realm makes it clear that tools are needed to comprehend the two domains together, the microscale of the quantum and the macroscale of lived reality. The issue concerns not only understanding more about quantum mechanics, but also linking the quantum and non-quantum domains such that the quantum realm can be activated in a useful way at the macroscale. This book is a journey towards a model to do precisely this, incorporating the superposition, entanglement, and interference properties of quantum mechanics with the 3D time and space geometries of the macroscale world.

Figure 1.1. Model of computational reality.

With the assumption that physical reality can be computed with quantum information systems, the theme of computation is taken up through the lens of smart network technologies (self-operating computation networks) and their potential expansion into the quantum mechanical domain. A technophysics approach (the application of physics principles to the study of technology) is used to derive smart network field theory (conventional and quantum) as a physical theory of smart network technologies. The organizing principle of this work is encapsulated in a causal model (Figure 1.1).

The model of computational reality posits that computation capability influences the ability to have knowledge about physical reality. Computation capability may be moderated by variables such as the properties of computational systems, the theories and algorithms that manage the computation process, and the security features that protect the system and allow its criticality to be managed: essentially, the tools and apparatus by which the capacity for efficient computation is deployed towards the end of understanding reality. The main hypothesis is that computation is currently the most significant factor in acquiring knowledge about physical reality at all scales, from the observable universe to the Planck scale.

1.1 Quantum Futures?

A speculative yet reasonable picture of the near future could unfold as follows. Quantum computing is farther along than may be realized, but is still a very early-stage technology with considerable uncertainty about its possible progression. However, given that early-stage quantum computers have been demonstrated and are shipping, and that perhaps a hundred or more worldwide labs are engaged in high-profile, well-funded, competitive research efforts, it seems possible that better quantum computers may become available in the next few decades. The path is clear but non-trivial: the task is scaling up the number of qubits from 30–70 to hundreds, thousands, and millions, and testing various quantum error correction schemes, with the end goal of constructing a universal fault-tolerant quantum computer.

Many kinds of quantum computing hardware are being demonstrated, and it is possible that optical quantum computing could accelerate in the same vein as optical global communications did previously, this time for the quantum internet, with satellite-based quantum key distribution, secure end-to-end communications, quantum routers and modems, and distributed quantum computing. Smart network technologies such as blockchain and deep learning are sponsoring the advance to quantum computing through post-quantum cryptography and quantum machine learning optimization and simulation; other smart network technologies could follow suit, such as autonomous vehicle networks, robotic swarms, automated supply chains, and industrial robotics cloudminds. Quantum computing is not just nice to have, but necessary to keep pace with growing data volumes and to enable a new slate of applications such as whole brain emulation, atomically-precise manufacturing, and causal models for disease.

1.2 Technophysics

Technophysics is the application of physics principles to the study of technology, by analogy to Biophysics and Econophysics. Biophysics is an interdisciplinary science that uses the approaches and methods of physics to study biological systems, with the goal of understanding their structure, dynamics, interactions, and functions. Likewise, Econophysics is an interdisciplinary field of research that applies theories and methods developed in physics to solve problems in economics and finance, particularly those involving uncertainty, stochastic processes, and nonlinear dynamics. In the case of Technophysics, although much technology development is initially physics-based, a wider range of concepts and methods from ongoing advances in physics could be applied to the study and development of new technology. In this work, the technophysics approach is directed at the study of smart network technologies (intelligent self-operating networks such as blockchains and deep learning neural networks), engaging statistical physics and information theory for the characterization, control, criticality assessment, and novelty catalysis of smart network systems.

Technophysics arises from a confluence of factors in contemporary science. First, there have been several non-trivial connections in the last few decades linking the physical world and the representational world of mathematics, algorithms, and information theory. One example is that statistical physics has made a link between spin glasses in condensed matter physics and combinatorial optimization problems. Specifically, an association between the glass transition in condensed matter physics, error-correcting codes in information theory, and probability theory in computer science has been established with statistical physics (Mezard & Montanari, 2009). Another example is the symbiotic relationship of using machine learning algorithms to understand physical systems, and using analogies with physical systems and materials to understand the operation of algorithms (Krzakala et al., 2007). A third example is the holographic principle (Susskind, 1995), formalized in the AdS/CFT correspondence (Maldacena, 1998), and its link to information theory (Harlow & Hayden, 2013), quantum error correction codes (Pastawski et al., 2015), and tensor networks (Witten, 2016). The holographic principle suggests that in any physical system, there is a correspondence between a volume of space and its boundary region. The implication is that the interior bulk can be described by a boundary theory in one fewer dimensions, in an information compression mechanism between the 3D bulk and the 2D boundary.
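
To make the spin-glass/optimization link concrete, the following sketch writes a small MAX-CUT instance as an Ising energy minimization. The graph and the brute-force approach are invented for this illustration and are not drawn from the cited works.

```python
# Illustrative spin-glass/optimization link: MAX-CUT on a toy graph,
# rewritten as minimizing an Ising energy H(s) = sum over edges of s_i * s_j.
import itertools

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # toy 4-node graph
n = 4

def ising_energy(spins):
    # Each edge contributes +1 if its endpoints agree and -1 if they disagree,
    # so minimizing the energy maximizes the number of cut (disagreeing) edges.
    return sum(spins[i] * spins[j] for i, j in edges)

# Brute-force the ground state (feasible only for tiny systems; large
# instances are where annealing and quantum approaches become interesting).
ground = min(itertools.product([-1, +1], repeat=n), key=ising_energy)
cut = sum(1 for i, j in edges if ground[i] != ground[j])
print(ground, "cuts", cut, "of", len(edges), "edges")
```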

A second factor giving rise to technophysics is cross-pollination in academic methods. Various approaches have become part of the canon of scientific methods irrespective of field. These methods include complexity science, network theory, information science, and computation graphs. Many fields now have a computational information science counterpart, both in the hard sciences (such as computational neuroscience, computational chemistry, and computational astrophysics) and in the liberal arts (seen in the digital humanities, semantic data structure investigation, and text mining and analysis). The opposite trend also holds: fields of study are developing physics-based counterparts, as in Biophysics, Econophysics, and Technophysics. This exchange between academic methods is leading to the development of a more comprehensive and robust apparatus of universal scientific study across all areas of the academe.

1.2.1 Conceptual toolkit of ideas

The premise of this text is that it is necessary to be facile with technophysics ideas to be effective in today's world. Hence, a compendium of knowledge modules to enable such facility is presented in the course of this book. Having a grasp of concepts and methods from a broader range of fields than one's initial field of training, especially from contemporary research frontiers and interstices, could increase the capacity for innovation and impact in the world. This book provides a strategy, with smart network field theories as a tool, for managing the considerable uncertainty of the disruptive possibilities of quantum computing in the next decades and beyond. Methods are integrated from statistical and theoretical physics, information theory, and computer science. The technophysics approach is rooted in both theory and application, and is presented at the level of defining a conceptual understanding as well as outlining how formalisms and analysis techniques may be used in practice.

It is possible to elaborate some of the standardized ideas that comprise the canon of technophysics knowledge. These include eigenvalues and eigenvectors, and the notion of operators that measure the multidimensional configuration of a system and that can act on the system. The Hamiltonian is the operator par excellence that measures the total energy in a system, and allows problems to be written in the form of an energy minimization problem. Complementarity (only one property of a quantum system can be measured at a time), time dilation (the system looks different to different viewers), and geometry-based metrics are important. There is a notion of selectable parameters, among different geometries, coordinate systems, and space and time domains.
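
As a minimal illustration of the Hamiltonian as an energy-measuring operator, the following sketch builds a toy one-qubit Hamiltonian from Pauli matrices (the coefficients are arbitrary example values) and reads off its eigenvalues and eigenvectors.

```python
# The Hamiltonian as an operator: a matrix whose eigenvalues are the allowed
# energies and whose eigenvectors are the corresponding states.
import numpy as np

Z = np.array([[1, 0], [0, -1]], dtype=float)  # Pauli-Z
X = np.array([[0, 1], [1, 0]], dtype=float)   # Pauli-X

# Toy one-qubit Hamiltonian H = 0.5*Z + 0.3*X (illustrative coefficients).
H = 0.5 * Z + 0.3 * X

energies, states = np.linalg.eigh(H)  # eigh: for Hermitian operators
print("energies:", energies)          # lowest eigenvalue = ground-state energy
print("ground state:", states[:, 0])  # eigenvector of the minimum energy
```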

In some sense, the terms network, graph, tensor, matrix, operator, and 3D all point at the same kind of concept, and while not technically synonyms, they mutually imply each other as properties of a computable system. A computation graph is a network, a network implies a computation graph or matrix, and all are 3D or of higher-order dimensionality. Further, an operator is a matrix, a field is a matrix, and tensor networks and random tensor networks can be used to rewrite high-dimensional entangled problem domains so that they are analytically computable. Dimensionality portability, rescaling, and reduction are frequently-used techniques. Answers are likely to be given in probabilistic terms. The bulk–boundary correspondence is a conceptual structure in which a system is written as two regions: a boundary surface in one fewer dimensions that can be used to interrogate the bulk, a more complex and possibly unknowable domain.
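
The following sketch illustrates the network/graph/matrix/operator identification on a toy graph invented for the example: the same object appears as an edge list (network), as an adjacency matrix whose powers count walks, and as an operator with a spectrum.

```python
# One object, three views: network (edge list), matrix (adjacency),
# and operator (acting on vectors over the nodes; eigenvalues below).
import numpy as np

edges = [(0, 1), (1, 2), (2, 0), (2, 3)]  # toy 4-node network
n = 4
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1  # the network rewritten as a matrix

# As an operator, powers of A count walks along the network:
walks_3 = np.linalg.matrix_power(A, 3)
print(walks_3[0, 2])  # number of 3-step walks from node 0 to node 2

# Spectral (eigenvalue) view of the same network:
print(np.linalg.eigvalsh(A))
```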

A key technophysics principle is reducing or rendering a physical system into a form that is computable, and then analytically solving it. Computational complexity is a constant focus. Classical problems can be made quantum-computable by writing them with SEI properties, that is, by converting the problem to a structure that engages the quantum properties of superposition, entanglement, and interference. As a gross heuristic, quantum computers may allow a one-tier increase in the computational complexity schema of problem calculation: a problem that takes exponential time in classical systems (i.e. too long) may take polynomial time in quantum systems (i.e. a reasonable amount of time for practical use). In the canonical Traveling Salesman Problem, for example, perhaps twice as many cities could be checked in half the time using a quantum computer.
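
As a minimal illustration of two of the SEI properties, the following sketch simulates a single qubit with plain linear algebra: a Hadamard gate creates an equal superposition, and a second application interferes the amplitudes back to the initial state.

```python
# Superposition and interference with plain linear algebra: H puts |0> into
# an equal superposition; applying H again makes the amplitudes interfere
# back to |0> (amplitudes, not probabilities, are what add and cancel).
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
ket0 = np.array([1.0, 0.0])                   # the state |0>

superposed = H @ ket0
print(superposed)               # [0.707, 0.707]: equal amplitudes on |0>, |1>
print(np.abs(superposed) ** 2)  # 50/50 measurement probabilities

recombined = H @ superposed     # the |1> amplitudes cancel (interference)
print(np.round(recombined, 6))  # back to [1, 0]
```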

1.2.2 New slate of all-purpose smart technology features

Smart network technologies are producing a variety of new developments: standard features built to solve important problems that also have extensibility beyond their initial purpose. One example is consensus algorithms, which provide the cryptographic security of blockchains through the mining operation, and which more generally comprise a standard technology (ConsensusTech) that could be used for the self-coordinated agreement and governance of any multi-agent system (human or machine). Other examples of generic technologies include zero-knowledge proofs, error correction, and hash functions, each of which, when generalized, conveys a new concept in technology.
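
As an illustration of the mining operation that underlies blockchain consensus, the following toy sketch (not any production protocol) searches for a nonce whose hash falls below a difficulty target.

```python
# Toy proof-of-work mining loop: find a nonce whose SHA-256 hash falls below
# a difficulty target. Illustrative only.
import hashlib

def mine(block_data: bytes, difficulty_bits: int = 16) -> int:
    target = 2 ** (256 - difficulty_bits)  # smaller target = harder puzzle
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # publishing this nonce proves work was expended
        nonce += 1

nonce = mine(b"example block header")
print("found nonce:", nonce)  # anyone can re-hash once to verify
```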

Zero-knowledge proofs are computational proofs, a mechanistic set of algorithms that could easily be incorporated as a feature in many technology systems to provide privacy and validation. Zero-knowledge proofs reveal no information except the correctness of the statement: data verification is separated from the data itself, conveying zero knowledge about the underlying data and thereby keeping it private. The proofs are used first and foremost to prove validity, for example, that someone is who they claim to be. Proofs are also an information compression technique: some amount of activity is conducted, and the abstracted output is all that is necessary as the outcome (the proof evaluates to a one-bit True/False answer or some other short output). The main concept of a proof is that some underlying work is performed and a validated short answer is produced as the result.
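
The following sketch illustrates the idea with one round of a Fiat–Shamir-style identification protocol, using toy-sized numbers chosen for the example: the prover demonstrates knowledge of a secret square root without revealing it, and the verifier's check evaluates to a short True/False answer.

```python
# One round of a Fiat-Shamir-style zero-knowledge identification protocol.
# The prover knows a secret s with public v = s^2 mod n; each round reveals
# only a response the verifier can check, never the secret itself.
# Toy parameters: real schemes use a modulus of thousands of bits.
import random

n = 3233             # toy modulus (61 * 53)
s = 123              # prover's secret
v = (s * s) % n      # public value registered in advance

def prove_one_round() -> bool:
    r = random.randrange(1, n)
    x = (r * r) % n              # prover's commitment to a random r
    b = random.randrange(2)      # verifier's random challenge bit
    y = (r * pow(s, b, n)) % n   # response uses the secret only when b == 1
    return (y * y) % n == (x * pow(v, b, n)) % n  # verifier's check

# Repeated rounds drive a cheating prover's success toward 2**-rounds,
# while the transcript conveys zero knowledge about s.
print(all(prove_one_round() for _ in range(20)))  # True
```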

Quantum error correction is necessary to repair quantum information bits (qubits) that become damaged, while adhering to quantum constraints such as the no-cloning theorem (quantum information cannot be copied) and the no-measurement rule (quantum information cannot be measured without damaging it). Consequently, quantum error correction relies upon entanglement among qubits to smear out the original qubit's information onto entangled qubits, which can then be used to error-correct (restore) the initial qubit. The proximate use is quantum error correction, but the greater benefit is that a structural feature is created, in the form of the error correction apparatus, that can be more widely deployed. An error correction-type architecture can be used for novel purposes: one project deploys the error correction feature to control local qubit interactions in an otherwise undirectable quantum annealing solver (Lechner et al., 2015). The overall concept is system manipulation and control through quantum error correction-type models.
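
A minimal sketch of the smearing-out idea is the three-qubit bit-flip repetition code, simulated below with plain state vectors (the amplitudes are arbitrary example values): parity checks locate an error without reading out, and thereby without damaging, the encoded information.

```python
# Three-qubit bit-flip code: the logical amplitudes are spread across three
# physical qubits; a parity syndrome locates a flip without revealing them.
import numpy as np

# Encode |psi> = a|0> + b|1> as a|000> + b|111> (8-dim state vector).
a, b = 0.6, 0.8
state = np.zeros(8)
state[0b000], state[0b111] = a, b

# A bit flip on the middle qubit toggles that bit in every basis index:
flipped = np.zeros(8)
for idx in range(8):
    flipped[idx ^ 0b010] = state[idx]

# Syndrome: parities (q0 xor q1, q1 xor q2) over the state's support
# identify which qubit flipped without revealing a or b.
support = [i for i in range(8) if abs(flipped[i]) > 0]
parities = {((i >> 2 & 1) ^ (i >> 1 & 1), (i >> 1 & 1) ^ (i & 1))
            for i in support}
print(parities)  # {(1, 1)}: both checks trip, so the middle qubit flipped

# Recovery: apply the same flip again to restore the encoded state.
recovered = np.zeros(8)
for idx in range(8):
    recovered[idx ^ 0b010] = flipped[idx]
print(np.allclose(recovered, state))  # True
```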

Hash functions are another example of a general-purpose smart network technology whose underlying mechanism is not new, but is finding a wider range of uses. Like proofs, hash functions are a form of information compression technology. A hash function is an algorithm that can be run over any arbitrarily large digital data file (e.g. a genome, movie, software codebase, or 250-page legal contract) and that produces a fixed-length code, often 32 bytes (64 alphanumeric characters). Hash functions have many current uses, including password protection and message authentication across the internet, such that a receiving party with the hashing algorithm and a key can verify that a message is authentic and unaltered. Hash functions are also finding novel uses. One is that since internet content can be specified with a URL, the URL can be called in a hash format by other programs (the hash function standardizes the length of an arbitrarily-long URL input). This concept is seen in Web 3.0 as hash-linked data structures.
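
The fixed-length property is easy to demonstrate; the following sketch hashes inputs of very different sizes to the same 32-byte digest length using SHA-256 (one common choice of hash function).

```python
# Fixed-length compression property: inputs of any size hash to the same
# 32-byte (64 hex character) digest length.
import hashlib

small = b"x"
large = b"a 250-page legal contract ... " * 100_000  # ~3 MB of input

for data in (small, large):
    digest = hashlib.sha256(data).hexdigest()
    print(f"{len(data):>9} bytes -> {len(digest)} hex chars: {digest[:16]}...")
```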

The key point is the development of generic feature sets in smart network technologies that can be translated to other uses. This is not a surprise, since a property of new technology is that its full range of applications cannot be envisioned at the outset and evolves through use; the automobile was initially conceived as a horseless carriage. What is noticeable is the theme that these features are all forms of information compression and expansion techniques (proofs and hash functions compress information, and error correction expands information). This too is not surprising, given that these features are applied to information theoretic domains in which a key question is the extent to which any signal is compressible (and, more generally, signal-to-noise ratios and the total number of possible system configurations, i.e. entropy). However, proofs and hash functions differ from traditional information compression techniques in that they can convert an arbitrarily-large input to a fixed output, which connotes the attributes of a flexible and dynamical real-time system. This book (especially Chapter 15) extends these insights to interpret the emerging features (proofs, error correction, and hash functions), and quantum smart networks more generally, in a dimensional model (the bulk–boundary correspondence). In the bulk–boundary correspondence, the compression or expansion activity is performed in a higher-dimensional region (the bulk), and then translated such that the result appears in one fewer dimensions in another region (the boundary).

1.3 Chapter Highlights

This book aims to provide insight into how quantum computing and quantum information science, as a possible coming paradigm shift in computing, may influence other high-impact digital transformation technologies such as blockchain and machine learning. A theoretical connection between physics and information technology is established. Smart network theory is proposed as a physical theory of network technologies that is extensible to their potential progression to quantum mechanical information systems. The purpose is to elaborate a physical basis for technology theories that is easily deployable in the design, operation, and catalytic emergence of next-generation smart network systems. This work proposes the theoretical construct of smart network theories, specifically a smart network field theory (SNFT) and a smart network quantum field theory (SNQFT), as a foundational basis for the further development of smart network systems, particularly quantum smart networks (smart network systems instantiated in quantum information processing environments).

There are pressing reasons to develop smart network theories as macro-level system control theories, since many smart network technologies are effectively black boxes whose operations are either unknown from the outset (deep learning networks) or becoming hidden through confidential transactions (blockchain-based economic networks). Such smart networks are complex systems whose possibility for system criticality and nonlinear phase transition is unknown and possibly of high magnitude.

Towards this end, Part 1 introduces smart networks and quantum computing. Chapter 2 defines smart networks and smart network theory, and develops the smart network field theory in the classical and quantum domains. Chapter 3 provides an overview of quantum computing, including the basic concepts (such as bit and qubit) and a detailed review of the different quantum hardware approaches and superconducting materials. A topic of paramount concern is addressed: when it might become possible to break existing cryptography standards with quantum computing (estimated to be unlikely within 10 years, although methods are constantly improving). Chapter 4 considers advanced topics in quantum computing such as interference, entanglement, error correction, and certifiably random bits as produced by the NIST Randomness Beacon.

Part 2 provides a detailed consideration of blockchains and zero-knowledge proofs. Chapter 5 elaborates a comprehensive range of current improvements underway in classical blockchains. Chapter 6 discusses the quantum internet, quantum key distribution, the risks to blockchains, and proposals for instantiating blockchain protocols in a quantum format. Chapter 7 consists of a deep-dive into zero-knowledge proof technology and its current status and methods. Chapter 8 elaborates post-quantum cryptography and quantum proofs.

Part 3 focuses on machine learning and artificial intelligence. Chapter 9 discusses advances in classical machine learning such as adversarial learning and dark knowledge (also an information compression technique), and Chapter 10 articulates the status of quantum machine learning. The first kinds of applications being implemented on quantum computers are machine learning-related, since both machine learning and quantum computation methods are applied to the same kinds of optimization and statistical data analysis problems.

Part 4 develops the smart network field theory on the basis of statistical physics, information theory, and model field theory systems. Chapter 11 elaborates two model field theories, statistical neural field theory and spin glass theory. Chapter 12 develops the smart network field theory in detail with system elements, operation, and criticality detection measures, and considers applications in blockchain and deep learning smart network systems.

Part 5 extends the smart network field theory into the quantum realm with a model called the AdS/CFT correspondence (also known as gauge/gravity duality and the bulk–boundary correspondence) and holographic codes. Chapter 13 describes the holographic principle and its formalization in the AdS/CFT correspondence, and work connecting physical theory to information theory through the correspondence. Chapter 14 discusses the quantitative mobilization of the AdS/CFT correspondence into holographic quantum error-correcting codes.

Part 6 posits quantum smart networks as the quantum instantiation of smart networks and elaborates the smart network quantum field theory. In Chapter 15, a number of speculative conjectures are presented as to how the smart network quantum field theory may be engaged in a holographic format based on the bulk–boundary correspondence. The risks, limitations, and further consequences of the work are discussed, proposing the possibility of multiple future eras of quantum computing.

References

Harlow, D. & Hayden, P. (2013). Quantum computation vs. firewalls. J. High Energ. Phys. 2013:85.

Krzakala, F., Montanari, A., Ricci-Tersenghi, F. et al. (2007). Gibbs states and the set of solutions of random constraint satisfaction problems. PNAS 104(25):10318–10323.

Lechner, W., Hauke, P. & Zoller, P. (2015). A quantum annealing architecture with all-to-all connectivity from local interactions. Sci. Adv. 1(9):e1500838.

Maldacena, J. (1998). The large N limit of superconformal field theories and supergravity. Adv. Theor. Math. Phys. 2:231–52.

Mezard, M. & Montanari, A. (2009). Information, Physics, and Computation. Oxford, UK: Oxford University Press, pp. 93–168.

Pastawski, F., Yoshida, B., Harlow, D. & Preskill, J. (2015). Holographic quantum error-correcting codes: Toy models for the bulk–boundary correspondence. J. High Energ. Phys. 6(149):1–53.

Preskill, J. (2000). Quantum information and physics: Some future directions. J. Modern Opt. 47(2/3):127–37.

Susskind, L. (1995). The world as a hologram. J. Math. Phys. 36(11):6377–96.

Witten, E. (2016). An SYK-like model without disorder. arXiv:1610.09758 [hep-th].
