From Traditional Fault Tolerance to Blockchain, by Wenbing Zhao
List of Figures
1.1 An example of a chain of threats with two levels of recursion.
1.2 Rollback recovery is enabled by periodically taking checkpoints and, usually, logging the requests received.
1.3 With redundant instances in the system, the failure of a replica can in some cases be masked, and the system continues to provide services to its clients without any disruption.
1.4 Main types of assets in a distributed system.
2.1 An example distributed system.
2.2 Consistent and inconsistent global state examples.
2.3 An example of the domino effect in recovery with uncoordinated checkpointing.
2.4 Finite state machine specification for the coordinator in the Tamir and Sequin checkpointing protocol.
2.5 Finite state machine specification for the participant in the Tamir and Sequin checkpointing protocol.
2.6 Normal operation of the Tamir and Sequin checkpointing protocol in an example three-process distributed system.
2.7 Finite state machine specification for the Chandy and Lamport distributed snapshot protocol.
2.8 Normal operation of the Chandy and Lamport global snapshot protocol in an example three-process distributed system.
2.9 A comparison of the channel state definition between (a) the Chandy and Lamport distributed snapshot protocol and (b) the Tamir and Sequin global checkpointing protocol.
2.10 Example state intervals.
2.11 An example for pessimistic logging.
2.12 Transport level (a) and application level (b) reliable messaging.
2.13 Optimizations of pessimistic logging: (a) concurrent message logging and execution; (b) logging of batched messages.
2.14 Probability density function of the logging latency.
2.15 A summary of the mean logging latency and mean end-to-end latency under various conditions.
2.16 Probability density function of the end-to-end latency.
2.17 Normal operation of the sender-based logging protocol.
2.18 An example normal operation of the sender-based logging protocol.
2.19 Two concurrent failures could result in the loss of determinant information for regular messages.
3.1 The three-tier architecture.
3.2 The Java EE architecture.
3.3 An example runtime path of an end-user request.
3.4 Component class and component instances.
3.5 The chi-square cumulative distribution function for degrees of freedom of 1, 2, 3, 4, and 5.
3.6 The path shape of the example runtime path shown in Figure 3.3.
3.7 Component class and component instances.
3.8 Dependency templates for nodes, processes, network paths, and the neighbor sets.
3.9 A partial dependency graph for an example system.
3.10 The error function.
3.11 A hypothetical dependency graph with abnormality for each component and the weight for each edge labeled.
3.12 The components that form a cycle in the f-map are reduced to a single unit in the r-map for recursive recovery.
3.13 The architecture of an Operator Undo framework.
4.1 The replication algorithm is typically implemented in a fault tolerance middleware framework.
4.2 Active replication, without (top) and with (bottom) voting at the client.
4.3 Passive replication.
4.4 Semi-active replication.
4.5 A write-all algorithm for data replication.
4.6 The problem of the write-all-available algorithm for data replication.
4.7 Preventing a transaction from accessing a not-fully-recovered replica is not sufficient to ensure one-copy serializable execution of transactions.
4.8 An example run of the quorum consensus algorithm on a single data item.
4.9 Basic steps for optimistic data replication for an operation-transfer system.
4.10 An example run of a system with three sites that uses Lamport clocks.
4.11 An example run of a system with three sites that uses vector clocks.
4.12 An example for the determination of the new version vector value after reconciling a conflict.
4.13 An example operation propagation using vector clocks in a system with three replicas.
4.14 An example for operation propagation using timestamp matrices in a system with three replicas.
4.15 Update commit using ack vectors in a system with three replicas.
4.16 Update commit using timestamp matrices in a system with three replicas.
4.17 An illustration of the CAP theorem.
4.18 Partition mode and partition recovery.
5.1 Examples of systems that ensure uniform total ordering and nonuniform total ordering.
5.2 In the sequencer-based approach, a general system is structured as a combination of two subsystems: one with a single receiver and the other with a single sender of broadcast messages.
5.3 An example rotating-sequencer-based system in normal operation.
5.4 Normal operation of the membership view change protocol.
5.5 Membership change scenario: competing originators.
5.6 Membership change scenario: premature timeout.
5.7 Membership change scenario: temporary network partitioning.
5.8 A simplified finite state machine specification for Totem.
5.9 A successful run of the Totem Membership Protocol.
5.10 Membership changes due to a premature timeout by N2.
5.11 Messages sent before N1 fails in an example scenario.
5.12 Messages delivered during recovery for the example scenario.
5.13 Messages sent before the network partitions into two groups, one with {N1, N2} and the other with {N3, N4, N5}.
5.14 Messages delivered during recovery in the two different partitions for the example scenario.
5.15 Causal ordering using vector clocks.
6.1 Normal operation of the Paxos algorithm.
6.2 A deadlock scenario with two competing proposers in the Paxos algorithm.
6.3 If the system has already chosen a value, the safety property for consensus would hold even without the promise-not-to-accept-older-proposal requirement.
6.4 If two competing proposers propose concurrently, the system might end up choosing two different values without the promise-not-to-accept-older-proposal requirement.
6.5 With the promise-not-to-accept-older-proposal requirement in place, even if two competing proposers propose concurrently, only a single value may be chosen by the system.
6.6 Normal operation of Multi-Paxos in a client-server system with 3 server replicas and a single client.
6.7 View change algorithm for Multi-Paxos.
6.8 With reconfigurations, a group of 7 replicas (initially 5 active and 2 spare replicas) can tolerate up to 5 single faults (without reconfigurations, only up to 3 faults can be tolerated).
6.9 The formation of primary and secondary quorums for a system with 3 main replicas and 2 auxiliary replicas.
6.10 The formation of primary and secondary quorums as the system reconfigures due to the failures of main replicas.
6.11 Normal operation of Cheap Paxos in a system with 3 main replicas and 1 auxiliary replica.
6.12 The formation of primary and secondary quorums for a system with 3 main replicas and 2 auxiliary replicas.
6.13 Normal operation of (Multi-) Fast Paxos in a client-server system.
6.14 Collision recovery in an example system.
6.15 Expansion of the membership by adding two replicas in method 1.
6.16 Expansion of the membership by adding two replicas in method 2.
6.17 Reduction of the membership by removing two replicas one after another.
7.1 Two scenarios that highlight why it is impossible to use 3 generals to solve the Byzantine generals problem.
7.2 The message flow and the basic steps of the OM(1) algorithm.
7.3 The message flow and the basic steps of the OM(2) algorithm.
7.4 Normal operation of the PBFT algorithm.
7.5 PBFT view change protocol.
7.6 A worst case scenario for tentative execution.
7.7 Normal operation of Fast Byzantine fault tolerance.
7.8 Zyzzyva agreement protocol (case 1).
7.9 Zyzzyva agreement protocol (case 2).
7.10 A corner case in view change in Zyzzyva.
8.1 Bitcoin nodes.
8.2 The relationship between private key, public key, and address in Bitcoin.
8.3 Bitcoin transaction structure.
8.4 An example transaction chain in Bitcoin.
8.5 Bitcoin transactions per block data since its inception in 2009 through September 15, 2020. The data are downloaded from https://www.blockchain.com/charts/n-transactions-per-block.
8.6 Bitcoin block structure.
8.7 An issue with Bitcoin Merkle tree computation where different trees could produce the same Merkle root.
8.8 Bitcoin blockchain consensus and conflict resolution.
8.9 Structure of an Ethereum transaction.
8.10 State transition via transaction in Bitcoin and Ethereum.
8.11 Ethereum smart contract structure.
8.12 An example transaction receipt in the JSON format. The content is color-coded: the yellow blocks contain identifier information for the transaction, the contract invoked, and the block in which the transaction resides; the blue block contains the cumulative gas used; the green block contains the logs; the red block contains the logs Bloom filter string; the purple block contains the status of the transaction (success or not); the pink block contains the gas used for this transaction alone.
8.13 Ethereum block structure.
8.14 The annotated source code for the verification of an ommer block.
8.15 An example of which stale blocks may be chosen as an ommer block.
8.16 The annotated source code for the block reward scheme in Ethereum.
8.17 The cache size vs. the epoch number.
8.18 The dataset size vs. the epoch number.
8.19 The Ethash algorithm.
8.20 The double-spending attack steps.
9.1 A model for public blockchain consensus.
9.2 Main loop used by a mining node to compete in the creation of a new block using PoS in PeerCoin.
9.3 Major steps in the CreateNewBlock function in PeerCoin PoS.
9.4 Major steps in the CreateCoinStake function in PeerCoin PoS.
9.5 Major steps in the CheckStakeKernelHash function in PeerCoin PoS.
9.6 Information included in the data stream for computing PoS hash.
9.7 Major steps in PoET consensus.
10.1 Main benefits of blockchain for applications.
10.2 A model for cyber-physical systems.
10.3 Blockchain-enabled CPS applications.
10.4 Key operations and their relationship with the CPS applications and the blockchain benefits.
10.5 Basic CPS operations with respect to the latency and throughput requirements.
10.6 Stale block rate for different block sizes and block intervals.
10.7 Throughput for different combinations of block sizes and block intervals.
10.8 Payment channel operation.
10.9 Two level logging for sensing data with blockchain.
10.10 The format for the raw data (together with the aggregated data tuple) for local logging.
10.11 Summary of the token paradigm.
10.12 A classification of blockchain applications based on token usage.
10.13 The impossibility trinity hypothesis.