Chapter 2

Smart Networks: Classical and Quantum Field Theory

Abstract

This work aims to establish a deeper connection between physics and information technology. Smart network theory is proposed as a physical theory of network technologies, particularly to encompass a potential expansion into the quantum computing domain. The objective is to elaborate a physical basis for technology theories that is easily deployable in the design, operation, and catalytic emergence of next-generation smart network systems. The general notion of smart network theory as a physical basis for smart network technologies is developed into a smart network field theory (SNFT) and a smart network quantum field theory (SNQFT) relevant to the two scale domains. The intuition is that the way to orchestrate many-particle systems from a characterization, control, criticality, and novelty emergence standpoint is through an SNFT and an SNQFT. Such theories should be able to make relevant predictions about smart network systems.

2.1 Smart Networks

Smart networks are intelligent autonomous networks, an emerging form of global computational infrastructure, in which decision-making and self-operating capabilities are built directly into the software. Examples of smart networks include blockchain economic networks, deep learning pattern recognition networks, unmanned aerial vehicles, real-time bidding (RTB) for advertising, and high-frequency trading (HFT) networks. More formally, smart networks are state machines that make probabilistic guesses about reality states of the world and act automatically based on these guesses. Communications networks are becoming computational networks in the sense of running executable code. Smart networks are a contemporary feature of reality with possibly thousands to billions of constituent elements, and thus require a theoretically robust model for their design and operation. Using statistical physics (statistical neural field theory and spin-glass models) and information theory (the anti-de Sitter space/conformal field theory, AdS/CFT, correspondence), this work proposes SNFTs for the orchestration of the fleet-many items in smart network systems.

2.2 Smart Network Theory

2.2.1 Conventional (SNFT) and quantum (SNQFT) field theories

There is an urgent motivation for the development of smart network theories. Smart network technologies are being demonstrated as domains of complexity (exhibiting the behavior of complex systems), which are challenging to understand and manage. Smart networks are like quantum many-body systems in which interactions become too complex to model directly. Many smart network technologies are effectively a black box whose operations are either unknown from the outset (deep learning networks), or becoming hidden through zero-knowledge proofs (blockchain economic networks). Simultaneously, smart network technologies are experiencing rapid worldwide adoption, becoming unwieldy in scale, and possibly migrating to quantum computers.

The research intuition is that a technophysics approach (the application of physics principles to the technology domain) is warranted for the further development of smart network technologies. The smart network theories proposed in this work, the smart network field theory (SNFT) and the smart network quantum field theory (SNQFT), are designed to provide an integrated theoretical basis, rooted in physical foundations, for smart network technologies. The smart network theories can be used to orchestrate many-particle systems from a characterization, control, criticality, and novelty emergence perspective.

2.2.2 Smart network technologies are quantum-ready

Smart network technologies are already instantiated in 3D formats (computation graphs, which imply programmability and analytic solvability), which makes them somewhat quantum-ready for the next steps toward deployment on quantum computers. The 3D format suggests the possibility of instantiation in quantum information systems with superposition and multi-dimensional spaces. One next step for implementation in quantum computers would be writing smart network technologies in terms of the quantum-computable properties, namely superposition, entanglement, and interference (the SEI properties).

Smart network technologies are quantum-ready in this 3D sense as a by-product of being instantiated in a computational graph format. This is part of a more general shift to 3D in both the computational domain and the end-user domain. Technological interfaces are accommodating more 3D interactions with reality (which is itself 3D). Emblematic of 3D interfaces is point cloud data, a new kind of internet traffic. Point cloud data captures 3D positioning information about entities (humans, robots, objects) in the context of their surroundings, using simultaneous localization and mapping (SLAM) technology. Another example of a 3D-type interface is an advance in machine learning called geometric deep learning, in which information can be analyzed in its native form rather than being compressed into lower-dimensional representations for processing. The point for quantum-readiness is that many smart network technologies such as blockchain and deep learning, along with many other contemporary analytical systems, are already instantiated in computation graphs, which are by definition 3D (and really n-dimensional); this could facilitate their transition to quantum information systems.

2.3 Two Eras of Network Computing

The notion of smart networks rests on the conceptualization of two fundamental eras of network computing (Figure 2.1). Most of the progress to date, from mainframes to mobile computing, has concerned the transfer of basic information on simple networks. Now, however, a second phase of network computing is inaugurating a new paradigm: smart networks (Swan, 2015).

Figure 2.1. Eras of network computing: Simple networks and smart networks.

Table 2.1. Early smart networks (Smart Networks 1.0).

Internet of trading | High-frequency trading | MarketTech
Internet of advertising | Real-time bidding | AdTech
Internet of energy | Smart grids (power) | EnergyTech
Internet of things | Smart city | SensorTech

2.3.1 Smart Networks 1.0

There are different developmental phases of smart networks. An early class of smart networks (Smart Networks 1.0) can be identified (Table 2.1). Smart Networks 1.0 include HFT networks, the RTB market for advertising, smart energy grids with automated load-rebalancing, and smart city Internet of Things (IoT) sensor ecologies. The concept of smart network technologies emerged with programmatic or HFT systems for automated stock market trading. In 2016, HFT was estimated to comprise 10–40% of the total US trading volume in equities, and 10–15% of the trading volume in foreign exchange and commodities (Aldridge & Krawciw, 2017). Another early example of a smart network technology is the RTB market, an automated marketplace for online display advertising (web-based ads). Impressions are sold in an RTB model in which advertisers bid for impressions in real time, as consumers visit websites. Although RTB is highly efficient, two pricing models persist in this market, both RTB and ahead-of-time reservation contracts (Sayedi, 2018). In other early smart network technologies, smart energy grids conduct automated load-rebalancing, in which the emergence of complex behavior has been noted, in particular the synchronization of coupled oscillators (Dorfler et al., 2013). Smart city IoT sensor ecologies indicate the substantial smart network challenges that are faced in coordinating the 50 billion connected objects that were estimated to be deployed by 2020 (Hammi et al., 2017).

2.3.2 Smart Networks 2.0

The contemporary generation of smart network technologies is Smart Networks 2.0 (Table 2.2). Although blockchain distributed ledgers and deep learning systems are some of the most prominent examples of smart network technologies, there are many kinds of such intelligent self-operating networks. Other examples include automated supply chain and logistics networks (TradeTech), autonomous vehicle networks (TransportTech), industrial robotics cloudminds, the potential quantum internet, Web 3.0’s internet of data structures, and virtual reality video gaming. Smart networks operate at scales ranging from the very large, in space logistics platforms (Supply Chain 4.0) (Chen & Ho, 2018), to the very small, for control systems in brain–computer interfaces (Swan, 2016) and human brain–cloud interfaces (Martins et al., 2019).

Table 2.2. Robust self-operating smart networks (Smart Networks 2.0).

Internet of value | Blockchains | EconTech, GovTech, PrivacyTech, ProofTech
Internet of analytics | Deep learning | IDTech
Internet of goods and services | Automated supply chain | TradeTech
Internet of vehicles | Autonomous driving networks | TransportTech
Internet of brains | Cloudminds, medical nanorobots (BCI) | NeuralTech
Internet of qubits | Quantum internet | QuantumTech
Internet of data structures | Web 3.0 | DataTech (HashTech)
Internet of virtual reality | Gaming | VirtualRealityTech

2.3.2.1 Blockchains

A blockchain is a distributed data structure which is cryptographically protected against modification, malicious or otherwise. Blockchains (technically, transaction blocks cryptographically linked together) are one topology among others in the more general class of distributed ledger technologies (Tasca & Tessone, 2019). Distributed ledger technology is EconTech and GovTech in the sense that institutional functions may be outsourced to computing networks (the administrative functions that orchestrate the patterns of human activity). Blockchains provide an alternative legal jurisdiction for the coordination of large groups of transnational actors, using game theoretic incentives instead of policing, and economics as a design principle. Blockchains may be evolving into a new era, that of PrivacyTech and ProofTech, through zero-knowledge proof technology and verifiable computing.

2.3.2.2 Machine learning: Deep learning neural networks

Machine learning is an artificial intelligence technology comprising algorithms that perform tasks by relying on information patterns and inference instead of explicit instructions. Deep learning neural networks are the latest incarnation of artificial intelligence, which uses computers to do cognitive work (physical or mental) that usually requires a human. Deep learning neural networks are mechanistic systems that “learn” by modeling high-level abstractions in data and cycling through trial-and-error guesses with feedback to establish a system that can make accurate predictions about new data. Machine learning systems are IDTech (identification technology), which conveys the ability to recognize objects (physical or digital), by analogy to FinTech, RegTech, TradeTech, and HealthTech as standard technologies that digitize, standardize, and automate their respective domains. Objects include patterns, structures, and other topological features that are within the scope of geometric deep learning.

The premise of deep learning is that reality comprises patterns, which are detectable through data science methods. Deep learning is notable as a smart network technology that replaces hard-coded software with a capacity, in the form of a learning network that is trained to perform an activity. Whereas initially software meant fixed programs running in closed domains (Software 1.0), software is starting to mean programs that dynamically engage with reality in a scope which is not fully prespecified at the outset (Software 2.0).

2.3.2.3 Internet of data structures (Web 3.0)

Web 3.0 means adding more functionality, collaboration, and trust to the internet (Web 1.0 was the read web, Web 2.0 the read/write web, and Web 3.0 the read/write/trust web). The idea is to install trust mechanisms such as privacy and verifiability from the beginning, directly into the software. Blockchains are in the lead of incorporating such PrivacyTech and ProofTech, and the functionality could spread to other domains.

Web 3.0 further connotes the idea of an internet of data structures. There are many different internet-based data structures such as blockchains, software codebases (Github), and various other content libraries. There may be a URL path to each content element. With Merkle tree structure tools, internet-based content trees become available to be called in other applications as distributed authenticated hash-linked data structures. A hash code (or simply hash) is the fixed-length output of a hash function (often 32 bytes, rendered as 64 hex characters, in blockchain protocols), which is used to map data of arbitrary size onto data of a fixed size. A Merkle tree or hash tree is a tree in which every leaf node is labeled with the hash of a data block, and every non-leaf node is labeled with the cryptographic hash of the labels of its child nodes. Hash trees are widely used for the secure and efficient verification of the contents of large data structures.
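To make the hash-tree construction concrete, here is a minimal sketch in Python using the standard hashlib library; the function names and the duplicate-last-leaf convention for odd counts are illustrative choices, not the specification of any particular blockchain protocol.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    # Hash function: maps data of arbitrary size onto a fixed 32-byte output.
    return hashlib.sha256(data).digest()

def merkle_root(blocks: list[bytes]) -> bytes:
    """Build a Merkle tree bottom-up and return its root.
    Leaves are hashes of data blocks; every non-leaf node is the hash
    of the concatenation of its two children."""
    level = [sha256(b) for b in blocks]               # leaf nodes
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])                   # duplicate last node if odd
        level = [sha256(level[i] + level[i + 1])      # parent = H(left || right)
                 for i in range(0, len(level), 2)]
    return level[0]

root = merkle_root([b"block1", b"block2", b"block3", b"block4"])
print(root.hex())  # fixed-length output: 32 bytes, 64 hex characters
```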

Distributed authenticated hash-linked data structures can be deployed with a project from Protocol Labs called InterPlanetary Linked Data (IPLD). An earlier project, the InterPlanetary File System (IPFS), is a content-addressable file system (an improvement over filename-addressable systems, which can produce errors when information paths no longer exist). IPLD is a data model for the content-addressable web in which all hash-linked data structures are treated as subsets of a unified information space, integrating all data models that link data with hashes as instances of IPLD. The web effectively becomes a Merkle forest of Merkle trees that can all be linked with interoperability through multi-hash protocols (connecting different hashing structures and content trees). These kinds of innovations are emblematic of infrastructural upgrades to the internet that facilitate privacy and security as important properties of smart network technologies.

2.3.3 Smart Networks 3.0: Quantum smart networks

In the farther future, quantum smart networks could comprise a next generation of smart networks, Smart Networks 3.0 (Table 2.3). The first quantum smart network application is the quantum internet, which is already in the early stages of development for quantum key distribution and secure end-to-end communications. Quantum blockchains are a possibility, with quantum key distribution and a more substantial implementation of blockchain protocols in quantum information systems, possibly using new concepts such as proof-of-entanglement, holographic consensus, and quantum channels as the analog of payment channels. Quantum machine learning is already progressing (through quantum annealing optimization, quantum simulation, and geometric deep learning). Finally, there could be quantum brain–computer interfaces (for example, using interference-based amplitudes as a firing threshold mechanism).

Table 2.3. Quantum smart networks (Future-class Smart Networks 3.0).

Quantum internet (Q-internet)
Quantum blockchains (QBC)
Quantum machine learning networks (QML)
Quantum brain–computer interfaces (QBCI)

2.3.4 Smart network convergence

2.3.4.1 Autonomous vehicle networks and robotic swarms

Many smart networks have fleet-many items that are autonomously coordinated. This includes unmanned vehicles (aerial, underwater, and space-based), autonomous-driving vehicles (automobiles, commercial trucks, and small transportation pods), drones, robotic swarms, and industrial robots. (Robotic swarms are multiple robots that are coordinated as one system.) Autonomous vehicle networks and robotic swarms must coordinate group behavior between themselves such as flocking, foraging, and navigation, in order to carry out tasks.

Complexity theory is used to study autonomous vehicle networks because they can exhibit undesirable emergent behavior such as thrashing, resource-starving, and phase change (Singh et al., 2017). Constraining flocking to optimized formations is a research focus for unmanned aerial vehicles, particularly those with autonomous strike capability (Vasarhelyi et al., 2018). Optimal control theory has been proposed as one basis for the development of risk management standards in self-controlling software (Kokar et al., 1999).

Applications for smart network fleets of self-coordinating machines arise in an increasing range of industrial and military use cases related to targeted material delivery, precision agriculture, and space-based and underwater exploration. Autonomous driving is a smart network technology attracting considerable interest. Contemporary research topics include collision avoidance (Woerner et al., 2019) and the definition of a set of international standards for the levels of driving automation (SAE, 2018).

There is a convergence story in that autonomous vehicle networks and robotic swarms are using other smart network technologies such as blockchain, machine learning, and IoT sensors. Blockchains offer a variety of functionality to autonomous vehicle networks including automated record-keeping, liability-tracking, compliance-monitoring, privacy, secure communications, and self-coordination. Notably, blockchain consensus algorithms are a method by which a group of agents can reach agreement on a particular state of affairs. Consensus is a generic technology that could be used for the self-coordination of any multi-agent system, not necessarily restricted to the context of mining and transaction confirmation. Hence, the blockchain-based self-governance of robotic swarm systems has been proposed (Ferrer, 2017).

The blockchain record-logging functionality together with smart contracts could be used to automatically file insurance claims when needed, and also to register discovery claims (claiming rights over discovered objects) in new venues such as undersea and space-based exploration. Smart city IoT sensor networks could be used in conjunction with blockchains and robotic swarms, allowing consumers and businesses to request robotic swarm-as-a-service functionality, sending out a swarm to conduct a sensing project. Robotic sensor swarms could survey the aftermath of accidents to summon emergency medical and police services, be engaged to provide security, and scan pipelines and other infrastructure for routine maintenance. The IoT robotic swarm model is in some sense a realization of the science-fiction idea of fleets of entrepreneur-owned mobile video cams for hire (Brin, 2002), as a sort of next-generation citizen journalism.

Verifiable computing and zero-knowledge proofs enable a new level of smart network self-coordination and control. Advanced applications, particularly for military use, could include scenarios in which agents can cooperatively work towards a solution while having minimal information. For example, the mission information could be stored in a Merkle tree such that swarm operators can see the general blueprint of the mission without the raw data being disclosed (Ferrer et al., 2019). The swarm agents could use the secure communications and consensus-reaching properties of blockchains to coordinate, self-govern, and problem-solve. Further, zero-knowledge technology (which separates data verification from the underlying data) could be used in two ways: for an agent to obtain the Merkle tree-stored data relevant to its own activity, and to prove its integrity to peers by exchanging cryptographic proofs.

Various features of blockchains are implicated in this advanced swarm robotics model. The basic features are privacy and secure communication. Then, consensus technology is used for reaching a self-orchestrated group agreement without a centralized authority. Merkle tree path-addressing exposes only need-to-know information. Finally, zero-knowledge proofs are used to prove earnest participation to peers without revealing any underlying personal information.
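A minimal sketch of the need-to-know property in Python, continuing the illustrative merkle_root example above: an agent holding only its own data block and a short proof path can verify membership against the published root, without seeing any other block.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_inclusion(block: bytes, proof: list[tuple[bytes, str]], root: bytes) -> bool:
    """Recompute the leaf-to-root path. `proof` lists (sibling_hash, side)
    pairs from the leaf level upward; `side` indicates whether the sibling
    sits to the left or right of the running hash."""
    h = sha256(block)
    for sibling, side in proof:
        h = sha256(sibling + h) if side == "left" else sha256(h + sibling)
    return h == root  # True only if the block belongs to the committed tree

# Example: prove b"block1" is in a four-leaf tree. The proof contains only
# the sibling leaf hash and the hash of the right subtree (values illustrative).
leaves = [sha256(b) for b in (b"block1", b"block2", b"block3", b"block4")]
right_subtree = sha256(leaves[2] + leaves[3])
root = sha256(sha256(leaves[0] + leaves[1]) + right_subtree)
proof = [(leaves[1], "right"), (right_subtree, "right")]
print(verify_inclusion(b"block1", proof, root))  # True
```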

2.3.4.2 Deep learning chains

Deep learning chains refers to the concept of a further convergence of smart networks in the notion of a generalized control technology that has properties of both blockchain and deep learning (Swan, 2018). Deep learning chains instantiate the secure automation, audit-log tracking, remunerability, and validated transaction execution of blockchains, and the object identification (IDTech), pattern recognition, and optimization technology of deep learning. Deep learning chains in the form of blockchain-based reinforcement learning have been proposed for an air traffic control system (Duong et al., 2019). Also, deep learning chains might be used as a general control technology for fleet-many internet-connected smart network technologies such as UAVs, drones, automated supply chain networks, robotic swarms, autonomous vehicle networks, and space logistics platforms. The minimal functionality of deep learning chains in autonomous driving fleets is identifying objects in a driving field (deep learning) and tracking vehicle activity (blockchain). Deep learning chains could likewise apply to the body, as a smart network control technology for medical nanorobots, identifying pathogens (deep learning) and tracking and expunging them (blockchain smart contracts). There could be greater convergence between individual smart network technology platforms (listed in Table 2.4 per their operating focus). For example, blockchains are starting to appear more regularly in the context of smart city power grid management (Pieroni et al., 2018).

2.3.4.3 Deep learning proofs

Computational proofs are a mechanistic set of algorithms that could be incorporated as a feature in many smart network technology systems to provide privacy and validation. The potentially wide-scale adoption of zero-knowledge proof technology in blockchains makes blockchains a PrivacyTech and a ProofTech. Zero-knowledge proof technology could be similarly adopted in other smart network systems such as machine learning, for example in the idea of deep learning proofs. The first reason is the usual use case for proofs: to prove validity. This could be an important functionality in image recognition networks in autonomous driving, for example, where the agent (the vehicle) can prove that certain behaviors were taken. Another reason to use proof technology is that proofs are an efficient mechanism with wider applicability beyond the proof execution context.

Table 2.4. Smart networks by operational focus.

Smart network | Smart network operational focus
1. Unmanned aerial vehicles (UAVs) | UAV drones with autonomous strike capability
2. High-frequency trading (HFT) | Algorithmic trading (40% US equities), auto-hedging
3. Real-time bidding (RTB) | Automated digital advertising placement
4. Energy smart grids | Power grid load-balancing and transfer
5. Blockchain economic networks | Transaction validation, self-governance, smart contracts
6. Deep learning networks | Object identification (IDTech), pattern recognition, optimization
7. Smart city IoT sensor landscapes | Traffic navigation, data climate, global information feeds
8. Industrial robotics cloudminds | Industrial coordination (cloud-connected smart machines)
9. Supply chain logistics nets | Automated sourcing, ordering, shipping, receiving, payment
10. Personal robotic assistant nets | Personalization, backup, software updates, fleet coordination
11. Space: aerial logistics rings | In situ resource provisioning, asynchronous communication

A central challenge in deep learning systems, which occupies a significant portion of research effort, is developing systems to efficiently calculate the error contribution of each node to the overall system processing. Various statistical error assessment methods are employed such as mean squared error (MSE), sum of squared errors of prediction (SSE), cross-entropy (softmax), and softplus (a smoothing function). An improved error contribution calculation method would be very helpful.
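For reference, a minimal sketch of these error measures in Python with NumPy; the formulations are the standard textbook ones rather than any specific framework's API.

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error."""
    return np.mean((y_true - y_pred) ** 2)

def sse(y_true, y_pred):
    """Sum of squared errors of prediction."""
    return np.sum((y_true - y_pred) ** 2)

def softmax_cross_entropy(logits, one_hot_labels):
    """Cross-entropy between one-hot labels and softmax(logits)."""
    shifted = logits - logits.max(axis=-1, keepdims=True)   # numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
    return float(-np.sum(one_hot_labels * log_probs, axis=-1).mean())

def softplus(x):
    """Smoothing function log(1 + e^x), a soft version of ReLU."""
    return np.logaddexp(0.0, x)

logits = np.array([[2.0, 0.5, -1.0]])
labels = np.array([[1.0, 0.0, 0.0]])
print(softmax_cross_entropy(logits, labels))  # ~0.24
```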

Proofs might be a useful solution because they are an information compression technique: some portion of activity is conducted, and the abstracted output is all that is necessary as a result (the proof evaluates to a one-bit true/false answer or some other short answer). With a proof structure, deep learning perceptrons could communicate their results using fewer information bits than they do now. The perceptron is a two-tier information system, with meta-attributes about its processing (error contribution, weights, biases) and the underlying values computed in the processing. The proof structure could be instantiated in the TensorFlow software architecture so that proofs would be automatically generated as a feature that flows through the system’s matrix multiplications. The idea of deep learning proofs is that in a deep learning system, each perceptron could execute a proof of its node’s contribution.
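As a loose sketch of the compression idea, under the strong assumption that a hash commitment can stand in for a real proof system: a perceptron performs its underlying work and publishes only a fixed-length digest of its processing record, so peers exchange 32 bytes rather than the full two-tier data. An actual deep learning proof would require a genuine proving scheme (for instance a zero-knowledge protocol), which this sketch does not implement.

```python
import hashlib
import numpy as np

def node_commitment(weights, bias, inputs):
    """Run one perceptron, then emit a fixed-length digest of the full
    processing record (weights, bias, inputs, output) as a stand-in 'proof'."""
    output = float(np.dot(weights, inputs) + bias)           # underlying work
    record = np.concatenate([weights, [bias], inputs, [output]])
    digest = hashlib.sha256(record.tobytes()).digest()       # 32 bytes, any layer size
    return output, digest

out, proof = node_commitment(np.array([0.4, -0.2]), 0.1, np.array([1.0, 3.0]))
print(out, proof.hex()[:16])  # short commitment instead of the raw record
```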

Deep learning consensus algorithms are another idea, in which consensus algorithms would be employed in deep learning systems so that perceptrons self-coordinate answers. Through deep learning consensus algorithms, the perceptrons could self-orchestrate the processing of the nodes, and also their initial setup into an optimal configuration of layers and nodes for the problem at hand. Consensus technology is a mechanism for self-organization and governance in multi-agent systems. Deep learning consensus algorithms build on the idea of deploying consensus technologies in robotic swarms to self-coordinate to achieve mission objectives (Ferrer et al., 2019).

2.4 Smart Network Field Theory: Classical and Quantum

The notion of smart network theory as a physical basis for smart network technologies is developed into the SNFT and the SNQFT, with respect to the two scale domains. The intuition is that the way to orchestrate many-particle systems from a characterization, control, criticality, and novelty emergence standpoint is through field theories such as an SNFT and an SNQFT. Such theories should be able to make relevant predictions about smart network systems as part of their operation.

Large-scale networks are a feature of contemporary reality. Such network entities are complex systems comprising thousands to billions of elements, and require an SNFT or other similar mechanism for the automated characterization, monitoring, and control of their activity. A theoretically-grounded model is needed, and smart network theories based on statistical physics (statistical neural field theory and spin-glass models), information theory (the AdS/CFT correspondence), and model systems are proposed. Generically, an SNFT (conventional or quantum) is a field theory for the characterization, monitoring, and control of smart network systems, particularly for criticality detection and fleet-many item management.

2.4.1 Theory requirements: Characterize, monitor, and control

The purpose of SNFTs is the characterization, monitoring, and control of smart network systems. The first objective is characterization. It is necessary to develop standard indicators and metrics to easily identify specific behaviors in smart network systems as they evolve and grow in scale and deployment. Both positive (emergent innovation) and negative (flash crash) behaviors should be assessable by the theory.

The second objective of an SNFT is to provide monitoring, at both the individual element and overall system level, of current and evolving behavior. Monitoring pertains to smart network operations that are currently unfolding, and also to those that may develop in the future. For example, in the farther future, deep thinkers (advanced deep learning systems) might go online. Although deep learning networks are currently isolated and restricted to certain computational infrastructures, it is imaginable that learning algorithms might be introduced to the internet. Risk management is a key concern. A Deep Thinkers Registry could be a safeguard for tracking an entity’s activity, with possible annual review by a Computational Ethics Review Board for continued licensing. This is a future example that demonstrates the intended extensibility of SNFTs, and the uncertain future situations that they might help to navigate.

The third objective of an SNFT is control, for the coordination of fleet-many items. Orchestrating fleet-many items is a clear automation economy use case for smart network technologies. This involves the ability to securely coordinate fleet-many items in any kind of internet-connected smart network system, which could include autonomous vehicles, drones, blockchain peer-to-peer nodes, deep learning perceptrons, smart city IoT sensors, home-based social robots, medical nanorobots, and supply chain shipment-receiving. The longer-term range of deployment of smart network technologies could extend to the very small, such as the cellular domains of the body, and the very large, such as civilization building in space.

2.4.1.1 Fleet-many coordination and system criticality

Whereas the practical application of an SNFT is the automated coordination of fleet-many items, the risk management application is detecting and possibly averting unwanted critical moments such as potential phase transitions. A crucial aspect of an SNFT is the predictive risk management of system criticality. It is important to have a mathematical and theoretical basis for understanding smart networks so that critical points and phase transitions may be predictively managed to the extent possible. The events that could constitute criticality and phase transition in smart networks include both expected and emergent situations, arising from within and outside the network. Some examples are financial contagion, network security breaches, novel technology emergence, and electromagnetic pulses.

SNFTs could also be useful in the well-formed design of smart network systems. They provide a formal scientific basis for studying smart networks as new technological objects in the contemporary world, particularly since smart networks are a nascent, evolving, and high-impact phenomenon.

2.5 Smart Network Field Theory Development

More precisely, an SNFT is any formal method for the characterization, monitoring, and control of smart network systems such as blockchains and deep learning networks. Although there are different kinds of smart networks, blockchain and deep learning are the focus for developing a field theory because they are the most sophisticated, robust, and conceptually novel.

2.5.1 The “field” in field theory

The term “field” is meant both analogically and literally (in the physical sense). Other terms invoked in this work, such as temperature and pressure, may likewise have both precise analytical meanings in the physical context and conceptual meanings. Terms may be applied conceptually according to the purpose and function they serve in smart network systems.

There are two primary meanings of field in the conceptual sense. First and most generally, field refers to the ability to control multiple items as one unit. The requisite functionality of the SNFT is to manage fleet-many items. One idea is to control them as a field, in which the term field might be dynamically defined based on location, energy, probability, gradients, or other parameters. The concept of field might be used to coordinate thousands or millions of constituent elements (such as blockchain peer-to-peer nodes or deep learning perceptrons). An example of an existing smart network field operation is optogenetics (in which neurons express a protein that makes their electrical activity controllable by light) (Boyden, 2015). Optogenetics is a “light switch for a field of neurons” in that it conveys the ability to turn a field of neurons on or off all at once. Thus, an SNFT is created, in that optogenetically enabled cells are controlled as a field as opposed to individually (Swan, 2021).

The second meaning of field in SNFTs is that a field might refer to the situation in which each element in a system has its own measure and contribution to an overall metric or network activity (possibly being used to calculate a Hamiltonian or other composite measure of a system). This concept of field (from effective field theory development in physics) suggests that every point in a landscape has a computable value, generally referring to the idea that a function has a value everywhere throughout the space, at every location in the field.
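As an illustration of this second meaning, the spin-glass model invoked as a model system in this work assigns every site of the field a local value and sums the per-element contributions into a composite Hamiltonian; in the smart network analogy, the sites would be nodes or perceptrons. The standard textbook form, shown here for illustration, is

$$H(s) = -\sum_{\langle i,j \rangle} J_{ij}\, s_i s_j - h \sum_i s_i, \qquad s_i \in \{-1, +1\},$$

where each element $s_i$ contributes locally and the couplings $J_{ij}$ encode the pairwise interactions between elements.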

2.5.1.1 Scalar, vector, and tensor

SNFTs may be structured in scalar, vector, and tensor terms. A tensor is a complex mathematical object (a geometric object that maps functions and interacts with other mathematical objects). The simplest tensors are scalars (zero-rank tensors) and vectors (first-rank tensors). A scalar is a tensor with magnitude but no direction (a zero-rank point), described by one number; mass and temperature are scalars. A vector is a tensor with magnitude and direction (representable as a first-rank tensor or directed line), often drawn as an arrow and defined with respect to a coordinate system; force and velocity are vectors. A higher-rank tensor is a multidimensional structure, representable by a multi-dimensional matrix. Tensors are prominent in the context of deep learning: Google’s tensor processing units (TPUs) and TensorFlow software use tensors in the sense of conducting very fast matrix multiplications, and they are fast because they flow through the matrix multiplications directly without storing intermediate values in memory.
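A brief sketch of the rank hierarchy in NumPy terms (TensorFlow tensors behave analogously); the values are illustrative:

```python
import numpy as np

scalar = np.float64(310.15)                 # rank-0 tensor: one number (a temperature)
vector = np.array([1.0, 2.0, 3.0])          # rank-1 tensor: magnitude and direction
matrix = np.array([[1.0, 2.0],
                   [3.0, 4.0]])             # rank-2 tensor: a 2D array
tensor3 = np.zeros((2, 3, 4))               # rank-3 tensor: multidimensional structure

print(scalar.ndim, vector.ndim, matrix.ndim, tensor3.ndim)  # 0 1 2 3

# The workhorse operation of TPUs/TensorFlow: chained matrix multiplication.
result = matrix @ matrix @ np.array([0.5, -0.5])
print(result)
```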

2.5.2 Statistical physics

A convenient theoretical approach to SNFT is based on statistical physics. Statistical physics is selected because of its focus on probability and inference in large systems. The two model systems used to elaborate the SNFT are also based on statistical models in physical systems (the brain and disordered magnets).

All of physics, in my view, will be seen someday to follow the pattern of thermodynamics and statistical mechanics

— John Archibald Wheeler (1983, p. 398)

Statistical physics is a catchall that includes statistical mechanics, probability, and thermodynamics. The great benefit of statistical physics is that it provides a generalized method, based in probability, for linking microscopic noise to macroscopic labels (Mayants, 1984, p. 174). Smart networks are also fundamentally based on probability. Smart network technologies such as blockchain and deep learning are probabilistic state machines that coordinate thousands to millions of constituent elements (processing nodes, whether perceptrons or miners, which can be seen as particles) to make high-probability guesses about reality states of the world. Hence, statistical physics could be a good basis for the formalization of smart networks.
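As a toy illustration of linking microscopic states to a macroscopic label by probability, the following sketch computes the Boltzmann-weighted average energy of a two-level system (units chosen so that Boltzmann's constant k = 1; the values are illustrative):

```python
import numpy as np

def average_energy(energies, temperature):
    """Macroscopic observable from microscopic states: weight each state
    by its Boltzmann probability exp(-E/T), normalized by the partition function."""
    energies = np.asarray(energies, dtype=float)
    weights = np.exp(-energies / temperature)
    probabilities = weights / weights.sum()    # partition function normalization
    return float(np.dot(probabilities, energies))

print(average_energy([0.0, 1.0], temperature=0.5))  # ~0.12: mostly in the ground state
print(average_energy([0.0, 1.0], temperature=5.0))  # ~0.45: states nearly equally likely
```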

Maxwell was among the first to suggest the application of probability as a general model for the study of the science of the very small. Statistical mechanics was likewise intended as a general method based on probability in its early development by Gibbs, following on from classical mechanics (Gibbs, 1902). The aim of statistical mechanics is to address all aspects of mechanical systems, at both the microscopic and macroscopic levels, for example, to explain transitions between gaseous and non-gaseous states.

The thermodynamic aspect of statistical physics is relevant in the formulation of SNFTs because smart networks are physical systems with thermodynamic effects. Blockchains are worldwide physical network systems, comprising on the order of 10,000 nodes hosting the transaction ledger for Bitcoin [10,403 (Bitnodes, 2019)] and Ethereum [8,141 (Ethernodes, 2019)]. Deep learning networks too have a physical basis, in that they run on dedicated hardware systems (NVIDIA GPU networks and Google TPU clusters). Concepts such as work, heat, and energy have thermodynamic measures in smart network systems. Blockchains perform work in the form of consensus algorithms (proof-of-work, proof-of-stake, etc.), a primary mechanism for providing network security and updating the ledger balances of the distributed computing system. Deep learning networks also perform work in the sense of running an operating cycle to derive a predictive classification model for data. The network expends significant resources to iteratively cycle forward and back through the layers to optimize trial-and-error guesses about the weighting of relevant abstracted feature sets such that new data can be correctly identified.
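A minimal sketch of the work in proof-of-work, in Python; the leading-zero-bits difficulty convention follows the general Bitcoin-style scheme, and the header bytes are illustrative rather than a specific client implementation:

```python
import hashlib

def proof_of_work(header: bytes, difficulty_bits: int) -> int:
    """Search for a nonce whose SHA-256 digest falls below a target.
    The expected number of trials, ~2**difficulty_bits, is the thermodynamic
    'work' dissipated as heat on the physical mining network."""
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

nonce = proof_of_work(b"example block header", difficulty_bits=16)
print(nonce)  # ~65,000 trials expected; verification takes a single hash
```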

Technophysics formulations of blockchains and deep learning have been proposed on the basis of thermodynamic properties. For example, a blockchain proof-of-work consensus process could be instantiated as an energy optimization problem with Hamiltonian optimizers and executed as a quantum annealing process on quantum computers (Kalinin & Berloff, 2018). In deep learning, a thermodynamics of machine learning approach has been used to propose representation learning as an alternative framework for reasoning in machine learning systems, in which distortion can be measured as a thermodynamic quantity (Alemi & Fischer, 2018).

2.6 Field Theory

A field theory is a theory that describes a background space and how the constituent elements in the space behave. In classical physics, a field is a region in which each point is affected by a physical quantity, be it a force, a temperature, or any other scalar, vector, or tensor quantity. For example, objects fall to the ground because they are affected by the force of Earth’s gravitational field. A field can thus be represented by a number or a tensor (a multi-dimensional number) that has a value for each point in space and time. A weather map, for example, has a temperature assigned to each point on the map. The temperatures may be studied at a fixed point in time (today’s temperature) or over a time interval to understand the dynamics of the system (the effects of temperature change).

Field theories are a particularly good mechanism for studying the dynamics of a system. The dynamics of a system refers to how a system changes with time or with respect to other independent physical variables upon which the system depends. The dynamics are obtained by writing an equation called a Lagrangian or a Hamiltonian of the field, and treating it as a classical or quantum mechanical system, possibly with an infinite number of degrees of freedom (parameters). The resulting field theories are referred to as classical or quantum field theories. The dynamics of a classical field are typically specified by the Lagrangian density in terms of the field components; the dynamics can be obtained by using the action principle. The dynamics of a quantum field are more complicated. However, since quantum mechanics may underlie all physical phenomena, it should be possible to cast a classical field theory in quantum mechanical terms, at least in principle, and this is assumed in the SNFT and SNQFT constructions.
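For concreteness, the standard textbook example of specifying dynamics through a Lagrangian density is the free scalar field, where applying the action principle yields the Klein–Gordon equation of motion:

$$\mathcal{L} = \tfrac{1}{2}\,\partial_\mu \phi\, \partial^\mu \phi - \tfrac{1}{2} m^2 \phi^2, \qquad \delta S = 0 \;\Rightarrow\; \left(\partial_\mu \partial^\mu + m^2\right)\phi = 0.$$

Here the single field $\phi(x)$ carries the degrees of freedom at every point of space–time, and the equation of motion follows from varying the action $S = \int \mathcal{L}\, d^4x$.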

2.6.1 The field is the fundamental building block of reality

At the quantum mechanical scale, the intuition behind field theory is that fields, not particles, may be the fundamental building blocks of reality. For example, Feynman points out that in the modern framework of the quantum theory of fields, even without referring to a test particle, a field occupies space, contains energy, and its presence precludes a classical true vacuum. This has led physicists to consider fields to be physical entities and a foundational aspect of quantum mechanical systems. The fact that the electromagnetic field can possess momentum and energy makes it very real (Feynman, 1970). One resulting interpretation is that fields underlie particles. Particles are produced as waves or excitations of so-called matter fields. Reality may be composed of fluid-like substances (having properties of flow) called fields. Quantum mechanical reality may be made up of fields, not particles.

2.6.2 Field theories: Fundamental or effective

Theories are either fundamental or effective. Fundamental theories are foundational universal truths, whereas effective theories are reasonable approximations, given the absence of additional proof or knowledge. Fundamental theories carry the weight of absolute truth. Effective theories serve effectively in the sense of being a reasonable approximation of situations that are not yet fully understood. Classical theories of physics were initially thought to be fundamental, but were then found not to be valid everywhere in the universe. Newtonian physics describes pulleys, but not electrons or detailed planetary movement (Einsteinian physics is used in GPS technology). In this sense, all theories of nature are effective theories, in that each is a possible approximation of some more fundamental theory that is as yet unknown.

There is another sense of the meaning of effective field theories, which is that the theory is only effective within a certain range. An effective theory may only be true within certain parameters or regimes, typically whatever domain or length-scale is used to experimentally verify the theory (Williams, 2017). For example, an effective field theory is a way to describe what happens at low energies and long wavelengths (in the domain of general relativity) without having a complete picture of what is happening at higher energies (in the domain of quantum mechanics). In high-energy physics (particle physics), processes can be calculated with the so-called Standard Model without needing a complete picture of grand unification or quantum gravity. The opposite is also true: when calculating problems in low-energy physics (gravitational waves), the effects of higher-energy physics (particle physics) can be bracketed out or summed up with a few measurable parameters (Carroll et al., 2014). Each domain has field theories that are effective within its scale-range. The difficulty is deriving field theories that explain situations in which high-energy physics and low-energy physics come together, such as black holes.

2.6.2.1 Effective field theories in quantum mechanics

Whereas a classical field theory is a theory of classical fields, a quantum field theory is a theory of quantum mechanical fields. A classical field theory is typically specified in conventional space and time (the 3D space and time of Euclidean macroscale reality). On the other hand, a quantum field theory is specified on some kind of background drawn from different models of space and time. To reduce complexity, quantum field theories are most generically placed on a fixed background such as a flat space, or a Minkowski space (flat relativistic space–time). Whatever the space and time region in which the quantum field theory is specified, the idea is to quantize the geometry and the matter contents of the quantum field into an effective theory that can be used to perform calculations. Effective field theories are useful because they can span classical and quantum domains, and more generally, different levels in systems with phase transitions. The SNFT is both a classical field theory and a quantum field theory.

2.6.3 The smart network theories are effective field theories

The SNFTs start with the idea that an effective field theory is a type of approximation, or effective theory, for an underlying physical theory (smart networks in this case). The effective field theory is a precision tool that can be used to isolate and explain a relevant part of a system in simpler terms that are analytically solvable. An effective field theory includes the appropriate degrees of freedom (parameters) to describe the physical phenomena occurring at a chosen length-scale or energy-scale within a system, while ignoring substructure and degrees of freedom at other distances or energies (Giorgi et al., 2004). The strategy is to average over the behavior of the underlying theory at shorter length-scales to derive what is hoped to be a simplified model for longer length-scales, which applies to the overall system.
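A minimal illustration of this averaging strategy is block-spin coarse-graining on a lattice of spins: each 2x2 block of microscopic spins is replaced by a single effective spin at the longer length-scale by majority rule. This is a generic renormalization-style device, sketched here in Python under illustrative conventions, not a procedure from the smart network literature.

```python
import numpy as np

def block_spin(lattice: np.ndarray) -> np.ndarray:
    """Coarse-grain a 2D lattice of +/-1 spins: each 2x2 block becomes one
    effective spin, the sign of the block average (ties broken toward +1)."""
    n = lattice.shape[0]                       # assume n is even
    blocks = lattice.reshape(n // 2, 2, n // 2, 2).mean(axis=(1, 3))
    return np.where(blocks >= 0, 1, -1)

rng = np.random.default_rng(0)
micro = rng.choice([-1, 1], size=(8, 8))       # microscopic configuration (noise)
macro = block_spin(micro)                      # simplified 4x4 effective description
print(macro)
```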

Effective field theories connote the existence of different scale levels within a system. They have been used to explain domains and simplify problems in many areas of particle physics, statistical mechanics, condensed matter physics, superconductivity, general relativity, and hydrodynamics. In condensed matter physics, effective field theories can be used, for example, to study multi-electron atoms, for which solving the Schrödinger equation is not feasible. In particle physics, effective field theories address phenomena such as beta decay (the Fermi theory). In general relativity, effective field theories have been used to simplify gravitational wave problems, and to theorize that general relativity itself may be the low-energy effective field theory of a full theory of quantum gravity (in which the expansion scale is the Planck mass). Particularly relevant for quantum computing is the practical application of effective field theories in the domains of superconductivity and condensed matter physics.

2.6.4 Complex multi-level systems

A key requirement for an SNFT is that it can be used to manage across diverse scale levels within a complex system. Such a field theory should be able to “identify macroscopic smoothness from microscopic noise” as prescribed by complexity theory (Mitchell, 2009). Various methods, including statistical physics, may be used for linking multiple dimensions within complex systems to obtain signal from noise.

Some aspects of a system are easier to measure at different scales. For example, computing the energy spectrum of the Hamiltonian at different levels of quantum mechanical systems can be challenging. Such calculations may be straightforward at higher levels of system abstraction, but more difficult when incorporating the energetic fields in which the particles actually behave. At that scale, computation becomes essentially intractable because there is so much data about particle movement. One strategy is to reinterpret particles as states of a quantized field (Jaffe & Witten, 2001). A field theory helps to reinstantiate or roll the system up to a higher level of abstraction at which such calculations can be made. The method is finding or defining an effective field theory at a scale that renders the system analytically solvable.

For example, the elliptical orbits of the planets are more easily calculated with Newtonian gravity than with general relativity. This simplification can be all that is necessary for certain applications. The benefit of a field theory is that it provides the ability to focus on a particular scale of a system, emphasizing one aspect while limiting others (Georgi, 1993). The objective is to find the simplest framework that captures the essential physics of the target area. For example, when there is interest in lighter particles (such as bottom quarks), heavier particles (e.g. Z-bosons and W-bosons) can be eliminated from the model.

In complex multi-level systems, identifying a macroscopic term corresponding to microscopic behavior is a key challenge. The analogs to the temperature and pressure terms arising from a room of septillions of moving particles in a model system are not always clear. Hence, an effective field theory is a formal process that can be used to identify a system’s “temperature” term and other system-level metrics. Effective field theories are similar to the renormalization concept (in the sense of mathematically scaling to a different level of the system to focus on a parameter of interest in a simplified manner that can be calculated).

2.7 Five Steps to Defining an Effective Field Theory

Effective field theories are important because there is interesting physics at all scales. Being able to portably travel up and down scale dimensions can make it easier to analyze certain aspects of systems. The idea is to use effective field theories as a tool for isolating parameters of interest within a system and engaging the system at that level. Effective field theories may work best when there is a large separation between the length scale of interest and the length scale of the underlying dynamics.

A distillation of the steps involved in deriving an effective field theory is outlined in Table 2.5. The aim of an effective field theory is to specify the simplest framework that captures the essential physics of interest. The zeroth step is to confirm that there are no already existing fundamental theories to describe the phenomenon and that an effective field theory is useful. Theories with related aspects could be identified as inspiration.

Table 2.5. Steps in articulating an effective field theory.

Step | Description
1. Define the system | Characterize the overall scope, shape, and levels of the system, including the relevant scales, lengths, and energies.
2. Identify system elements | Identify the constituent elements of the system and the kinds of interactions between them.
3. Isolate variables of interest | Articulate the aspects of interest that the field theory should study.
4. Reduce complexity by eliminating unnecessary system substructure | Identify the degrees of freedom (the aspects of the system that matter for the problem of study) and the irrelevant substructure that can be ignored; note symmetries, anomalies, or other known complexity attributes.
5. Identify quantitative metrics | Articulate the mathematics to measure the system, averaging the underlying behavior to derive a simplified model with a global term such as a Hamiltonian or Lagrangian.

Source: Adapted from Manohar (2017).

The first of the five steps is to define the system by characterizing the overall scope and shape of the system to be studied, including the relevant scale levels in terms of lengths or energies that comprise the system. The second step is to identify the system elements, the particles or other elements that constitute the system, and the kinds of interactions between them. The third step is to isolate the particular variables of interest that the theory aims to study. The fourth step is to reduce complexity by eliminating the unnecessary system substructure that can be ignored for studying the variables of interest within the system. More detailed aspects of the subsystem of interest are identified, such as the degrees of freedom (system parameters) and any complexity properties such as symmetries and anomalies that may influence the theory application. The fifth step is identifying the relevant quantitative metrics for measuring the system. The available quantities in the system are identified and averaged over to generate a composite measure of the system, such as a Hamiltonian or Lagrangian.

An effective field theory example in a biological neural network is that the system-wide quantity of interest might be the spiking activation (the threshold at which neurons fire), and other data would be superfluous. Another example is a minimal effective field theory that only specifies the fields, the interactions, and the power counting of the system (the dimensions of power counting across scales).

Beyond the basic steps, effective field theories might include more complicated aspects. There could be additional quantities to measure such as available potential energy, propagation, and the range of system states. Also relevant is identifying the dynamics of the system, the dimensions into which the system is expanding. There could be various degrees of freedom. The term degrees of freedom generally connotes system parameters. More specifically, degrees of freedom is a statistical term referring to each of a number of independently variable factors that can affect the range of states in which a system may exist, and the directions in which independent motion can occur. Degrees of freedom can be conceived simply as system parameters, and with greater sophistication as a statistical measure of the number of states and ways in which a dynamic system can move, whether this “motion” is considered in physical space or in an abstract space of configurations.

Overall, the key steps in specifying an effective field theory consist of defining (1) the system, (2) the system elements and interactions, (3) the variables of interest, (4) the irrelevant structure that can be ignored, and (5) the quantitative metrics that can be averaged over the system to produce a temperature-type term. Applying the effective field theory development technique to the smart network context, the idea is to consider the different levels and dimensions of the system, and identify the elements, interactions, and relevant quantities to calculate in order to obtain the system behavior. This is developed in more detail in Chapters 11 and 12.

References

Aldridge, I. & Krawciw, S. (2017). Real-Time Risk: What Investors Should Know About Fintech, High-Frequency Trading and Flash Crashes. Hoboken, NJ: Wiley.

Alemi, A.A. & Fischer, I. (2018). TherML: Thermodynamics of Machine Learning. ICML 2018. Theoretical Foundations and Applications of Deep Generative Models Workshop.

Bitnodes (2019). https://bitnodes.earn.com/. Accessed June 30, 2019.

Boyden, E.S. (2015). Optogenetics and the future of neuroscience. Nat. Neurosci. 18:1200–1.

Brin, D. (2002). Kiln People. New York, NY: Tor Books.

Carroll, S.M., Leichenauer, S. & Pollack, J. (2014). A consistent effective theory of long-wavelength cosmological perturbations. Phys. Rev. D. 90:023518.

Chen, H. & Ho, K. (2018). Integrated space logistics mission planning and spacecraft design with mixed-integer nonlinear programming. J. Spacecr. Rockets. 55(2):365–81.

Dorfler, F., Chertkov, M. & Bullo, F. (2013). Synchronization in complex oscillator networks and smart grids. PNAS 110(6):2005–10.

Duong, T., Todi, K.K. & Chaudhary, U. (2019). Decentralizing air traffic flow management with blockchain-based reinforcement learning. Aalto University, Finland.

Ethernodes (2019). https://ethernodes.org/network/1. Accessed June 30, 2019.

Ferrer, E.C. (2017). The blockchain: A new framework for robotic swarm systems. arXiv:1608.00695 [cs.RO].

Ferrer, E.C., Hardjono, T., Dorigo, M. & Pentland, A. (2019). Secure and secret cooperation of robotic swarms by using Merkle trees. arXiv:1904.09266 [cs.RO].

Feynman, R.P. (1970). The Feynman Lectures on Physics. Vol I. London, UK: Pearson PTR.

Georgi, H. (1993). Effective field theory. Annu. Rev. Nucl. Part. Sci. 43:209–52.

Gibbs, J.W. (1902). Elementary Principles in Statistical Mechanics. New York, NY: Scribner.

Giorgi, G.A., Guerraggio, A. & Thierfelder, J. (2004). Mathematics of Optimization: Smooth and Nonsmooth Case. London, UK: Elsevier.

Hammi, B., Khatoun, R., Zeadally, S. et al. (2017). Internet of Things (IoT) technologies for smart cities. IET Networks. 7.

Jaffe, A. & Witten, E. (2001). Quantum Yang–Mills Theory, 1–14. https://www.claymath.org/sites/default/files/yangmills.pdf. Accessed June 30, 2019.

Kalinin, K.P. & Berloff, N.G. (2018). Blockchain platform with proof-of-work based on analog Hamiltonian optimisers. arXiv:1802.10091 [quant-ph].

Kokar, M.M., Baclawski, K. & Eracar, Y.A. (1999). Control theory-based foundations of self-controlling software. IEEE Intell. Syst. App. 14(3):37–45.

Manohar, A.V. (2017). Introduction to effective field theories. EFT (Particle Physics and Cosmology) July 3–28, 1–94.

Martins, N.R.B., Angelica, A., Chakravarthy, K. et al. (2019). Human brain/cloud interface. Front. Neurosci. 13:112.

Mayants, L. (1984). The Enigma of Probability and Physics. New York, NY: Springer.

Mitchell, M. (2009). Complexity: A Guided Tour. Oxford, UK: Oxford University Press.

Pieroni, A., Scarpato, N., Di Nunzio, L. et al. (2018). Smarter city: smart energy grid based on blockchain technology. Intl. J. Adv. Sci. Eng. Info. Tech. 8(1):298–306.

SAE (2018). SAE International Releases Updated Visual Chart for Its “Levels of Driving Automation” Standard for Self-Driving Vehicles. SAE.

Sayedi, A. (2018). Real-time bidding in online display advertising. Market. Sci. 37(4):553–68.

Singh, S., Lu, S., Kokar, M.M. & Kogut, P.A. (2017). Detection and classification of emergent behaviors using multi-agent simulation framework (WIP). Spring 2017 Simulation Multi-Conference (SCS).

Swan, M. (2015). Blockchain: Blueprint for a New Economy. Sebastopol, CA: O’Reilly Media.

Swan, M. (2016). The future of brain-computer interfaces: Blockchaining your way into a cloudmind. JET. 26(2).

Swan, M. (2018). Blockchain for business: Next-generation enterprise artificial intelligence systems. In: Raj, P., Deka, G.C. (eds). Advances in Computers, Vol. 111. Blockchain Technology: Platforms, Tools and Use Cases. London, UK: Elsevier.

Swan, M. (forthcoming). Technophysics, Smart health networks, and the biocryptoeconomy. In: Boehm, F. (ed). Nanotechnology, Nanomedicine, and AI: Toward the Dream of Global Health Care Equivalency. Boca Raton, FL: CRC Press.

Tasca, P. & Tessone, C.J. (2019). A taxonomy of blockchain technologies: Principles of identification and classification. Ledger 4.

Vasarhelyi, G., Viragh, C., Somorjai, G. et al. (2018). Optimized flocking of autonomous drones in confined environments. Sci. Robot. 3(20):eaat3536.

Wheeler, J.A. (1983). On recognizing ‘law without law.’ Oersted Medal Response at the joint APS-AAPT Meeting, New York, 25 January 1983. Am. J. Phys. 51(5):398–406.

Williams, M. (2017). Effective Theory, Motifs in Physics Series. Harvard University, pp. 1–6.

Woerner, K., Benjamin, M.R., Novitzky, M. & Leonard, J.J. (2019). Quantifying protocol evaluation for autonomous collision avoidance. Auton. Robots. 43(4):967–91.
