Digital Communications 1
Contents
Safwan El Assad. Digital Communications 1
Table of Contents
List of Illustrations
List of Tables
Digital Communications 1. Fundamentals and Techniques
Foreword
Introduction to Part 1
1. Introduction to Telecommunications. 1.1. Role of a communication system
1.1.1. Types of services offered by communication systems
1.1.2. Examples of telecommunications services
1.2. Principle of communication
1.3. Trend towards digital communications
2. Measurement of Information of a Discrete Source and Channel Capacity. 2.1. Introduction and definitions
2.2. Examples of discrete sources. 2.2.1. Simple source (memoryless)
2.2.2. Discrete source with memory
2.2.3. Ergodic source: stationary source with finite memory
2.2.4. First order Markovian source (first order Markov chain)
2.3. Uncertainty, amount of information and entropy (Shannon’s 1948 theorem)
2.3.1. Entropy of a source
2.3.2. Fundamental lemma
2.3.3. Properties of entropy
2.3.4. Examples of entropy. 2.3.4.1. Two-event entropy (Bernoulli’s law)
2.3.4.2. Entropy of an alphabetic source with (26 + 1) characters
2.4. Information rate and redundancy of a source
2.5. Discrete channels and entropies
2.5.1. Conditional entropies
2.5.2. Relations between the various entropies
2.6. Mutual information
2.7. Capacity, redundancy and efficiency of a discrete channel
2.7.1. Shannon’s theorem: capacity of a communication system
2.8. Entropies with k random variables
3. Source Coding for Non-disturbed Channels. 3.1. Introduction
3.2. Interest of binary codes
3.3. Uniquely decodable codes
3.3.1. Regular code
3.3.2. Uniquely decodable code (decipherable or decodable code)
3.3.3. Instantaneous code (irreducible code)
3.3.4. Prefix
3.3.5. Design of an instantaneous binary code
3.3.6. Kraft–McMillan inequality
3.4. Average codeword length. 3.4.1. Coding efficiency in terms of transmission speed
3.4.2. Minimum average codeword length
3.5. Capacity, efficiency and redundancy of a code
3.6. Absolute optimal codes
3.7. K-order extension of a source
3.7.1. Entropy of the 2nd order extension of a source [S]
3.7.2. Simple example of the interest of coding a source extension
3.8. Shannon’s first theorem
3.9. Design of optimal binary codes
3.9.1. Shannon–Fano coding
3.9.1.1. Example of Shannon–Fano binary coding
3.9.2. Huffman code
3.9.2.1. Example of binary Huffman code design
4. Channel Coding for Disturbed Transmission Channels. 4.1. Introduction
4.2. Shannon’s second theorem (1948)
4.3. Error correction strategies
4.4. Classification of error detection codes or error correction codes
4.5. Definitions related to code performance
4.5.1. Efficiency
4.5.2. Weight of linear code or Hamming’s weight
4.5.3. Hamming distance
4.6. Form of the decision
4.6.1. Maximum a posteriori likelihood decoding
4.7. Linear group codes
4.7.1. Decoding ball concept: Hamming’s theorem
4.7.1.1. Ability to detect and correct errors
4.7.1.2. Hamming’s theorem
4.7.2. Generating matrix [G] and test matrix [H]. 4.7.2.1. Recall on matrices: transposed matrix and product of two matrices
4.7.2.1.1. Transposed matrix
4.7.2.1.2. Properties
4.7.2.1.3. Product of two matrices
4.7.2.2. Generating matrix [G]
4.7.2.3. Test matrix [H]
4.7.3. Error detection and correction
4.7.4. Applications: Hamming codes (r = 1)
4.7.4.1. Detailed study
4.7.4.2. Syndrome calculation
4.7.5. Coding and decoding circuits. 4.7.5.1. Hamming encoder
4.7.5.2. Hamming decoder
4.7.6. Extension of Hamming codes
4.7.6.1. Extension of the Hamming encoder
4.7.6.2. Extension of the Hamming decoder
4.7.7. Relationships between columns of the matrix [H']
4.8. Cyclic codes. 4.8.1. Introduction
4.8.1.1. Recall of some useful results of algebra
4.8.2. Expression of a circular permutation
4.8.2.1. One-step left circular permutation
4.8.2.2. j-step left circular permutations
4.8.3. Generating polynomial g(x), generating matrix [G] and theorem of cyclic codes
4.8.3.1. Form of g(x)
4.8.3.2. Generating matrix [G]mn
4.8.3.3. Cyclic codes theorem
4.8.4. Dual code generated by h(x) and parity control matrix [H]. 4.8.4.1. Dual code generated by h(x)
4.8.4.2. Matrix of parity control [H]
4.8.5. Construction of the codewords (coding)
4.8.5.1. Coding by division: systematic code
4.8.5.2. Coding and decoding circuits: intuitive method through an example
4.8.5.3. Transfer function of the division circuit: formal method
4.8.5.3.1. Division circuit with pre-multiplication by x^k, general case
4.8.5.4. Coding and decoding by division circuit, general case. 4.8.5.4.1. Coding
4.8.5.4.2. Decoding
4.9. Linear feedback shift register (LFSR) and its applications
4.9.1. Properties
4.9.2. Linear feedback shift register encoder and decoder (LFSR). 4.9.2.1. Linear feedback shift register encoder (LFSR)
4.9.2.2. Linear feedback shift register decoder (LFSR)
4.9.3. Coding by multiplication: non-systematic code
4.9.3.1. Practical transposition with an example
4.9.4. Detection of standard errors with cyclic codes
4.9.4.1. Detection of single error and odd number errors
4.9.4.2. Detection of double errors
4.9.4.3. Detection of single, double or triple errors
4.9.4.4. Error packet detection
4.9.5. Pseudo-random sequence generators: M-sequences, Gold, Kasami and Trivium
4.9.5.1. Generalities and applications
4.9.5.2. Maximum length sequences or M-sequences
4.9.5.3. Properties of M-sequences
4.9.5.3.1. Autocorrelation and cross-correlation functions
4.9.5.3.2. Cross-correlation function of two waveforms s(t) and q(t)
4.9.5.3.3. Discrete, normalized and periodic autocorrelation function
4.9.5.3.4. Discrete, normalized and periodic cross-correlation function
4.9.5.3.5. Confidentiality with M-sequences
4.9.5.4. Preferred pairs of M-sequences
4.9.5.4.1. Determining whether a pair of M-sequences forms a preferred pair
4.9.5.5. Gold sequence generator
4.9.5.5.1. Summary of the values of the cross-correlation of the M-sequences and the Gold sequences
4.9.5.6. Kasami sequence generator
4.9.5.6.1. Small Kasami set
4.9.5.6.2. Large Kasami set
4.9.5.7. Trivium non-linear generator
4.9.5.7.1. Need for an initial value (IV) to overcome known-plaintext attacks
4.9.5.7.2. Internal structure of Trivium
Introduction to Part 2
5. Binary to M-ary Coding and M-ary to Signal Coding: On-line Codes. 5.1. Presentation and typology
5.2. Criteria for choosing an on-line code
5.3. Power spectral densities (PSD) of on-line codes
5.4. Description and spectral characterization of the main linear on-line codes with successive independent symbols
5.4.1. Binary NRZ code (non-return to zero): two-level code, two types of code. 5.4.1.1. NRZ-L code "level code"
5.4.1.2. NRZ-M code “mark” or differential
5.4.2. NRZ M-ary code
5.4.3. Binary RZ code (return to zero)
5.4.4. Polar RZ on-line code
5.4.5. Binary biphase on-line code (Manchester code)
5.4.6. Binary biphase mark or differential code (Manchester mark code)
5.5. Description and spectral characterization of the main on-line non-linear and non-alphabetic codes with successive dependent symbols
5.5.1. Miller’s code
5.5.2. Bipolar RZ code or AMI code (alternate mark inversion)
5.5.3. CMI code (coded mark inversion)
5.5.4. HDB-n code (high density bipolar on-line code of order n)
5.6. Description and spectral characterization of partial response linear codes
5.6.1. Generation and interest of precoding
5.6.2. Structure of the coder and precoder. 5.6.2.1. Coder
5.6.2.2. Precoder
5.6.2.3. Combined precoder and encoder
5.6.3. Power spectral density of partial response linear on-line code
5.6.4. Most common partial response linear on-line codes
5.6.4.1. Duobinary code
5.6.4.2. NRZ bipolar code
5.6.4.3. Interleaved order 2 bipolar code
5.6.4.4. Biphase codes
5.6.4.5. Biphase code WAL1
5.6.4.6. Biphase code WAL2
6. Transmission of an M-ary Digital Signal on a Low-pass Channel. 6.1. Introduction
6.2. Digital systems and standardization for high data rate transmissions
6.3. Modeling the transmission of an M-ary digital signal through the communication chain
6.3.1. Equivalent energy bandwidth Δfe of a low-pass filter
6.4. Characterization of the intersymbol interference: eye pattern
6.4.1. Eye pattern
6.5. Probability of error Pe
6.5.1. Probability of error: case of binary symbols ak = ±1
6.5.1.1. Calculation of conditional probabilities of erroneous decision
6.5.1.2. Connection with the detection theory
6.5.1.3. Optimal decision threshold
6.5.1.4. Probability of error in the simple case of equiprobability of symbols ak
6.5.2. Probability of error: case of binary RZ code
6.5.2.1. Probability of error in the case of equiprobable symbols
6.5.3. Probability of error: general case of M-ary symbols
6.5.3.1. Decision rules (symbols ak: âk). 6.5.3.1.1. Case of intermediate values âk = 2m + 1, with m such that âk ≠ ±(M – 1)
6.5.3.1.2. Case of extreme values ±(M – 1)
6.5.3.2. Calculation of the conditional probabilities of error in the case of M-ary symbols (M is even)
6.5.3.2.1. Case of intermediate values of decision
6.5.3.2.2. Case of extreme value of decision âk = +(M – 1)
6.5.3.2.3. Case of the extreme value of decision âk = –(M – 1)
6.5.4. Probability of error: case of bipolar code
6.6. Conditions of absence of intersymbol interference: Nyquist criteria. 6.6.1. Nyquist temporal criterion
6.6.2. Nyquist frequency criterion
6.6.3. Interpretation of the Nyquist frequency criterion
6.7. Optimal distribution of filtering between transmission and reception
6.7.1. Expression of the minimum probability of error for a low-pass channel satisfying the Nyquist criterion
6.8. Transmission with a partial response linear coder
6.8.1. Transmission using the duobinary code
6.8.1.1. Demonstration of relation [6.168]
6.8.2. Transmission using 2nd order interleaved bipolar code
6.8.3. Reception of partial response linear codes
6.8.3.1. Case of duobinary coding
6.8.3.2. Example of duobinary coding and decoding
6.8.3.3. Case of 2nd order interleaved bipolar coding
6.8.4. Probability of error Pe
6.8.4.1. Calculation of conditional probabilities
7. Digital Transmissions with Carrier Modulation. 7.1. Introduction and schematic diagram of a digital radio transmission
7.2. Multiple access techniques and most common standards
7.3. Structure of a radio link, a satellite link and a mobile radio channel
7.3.1. Structure of a terrestrial link (one hop)
7.3.2. Structure of a satellite telecommunication link
7.3.3. Mobile radio channel
7.4. Effects of multiple paths and non-linearities of power amplifiers. 7.4.1. Effects of multiple paths: simple case of a direct path and only one delayed path
7.4.2. Effects of non-linearities of power amplifiers
7.5. Linear digital carrier modulations. 7.5.1. Principle
7.5.2. General characteristics of the modulated signal s(t)
7.5.2.1. Mean value
7.5.2.2. Autocorrelation function
7.5.2.3. Power spectral density
7.6. Quadrature digital linear modulations: general structure of the modulator, spatial diagram, constellation diagram and choice of a constellation
7.6.1. General structure of the modulator
7.6.2. Spatial diagram (or vectorial) and constellation diagram
7.6.3. Choosing a constellation diagram
7.7. Digital radio transmission and equivalent baseband digital transmission: complex envelope
7.7.1. Equivalent baseband digital transmission: complex envelope
7.8. Equivalent baseband transmission, interest and justification: analytical signal and complex envelope
7.8.1. Interest: important simplification in numerical simulation
7.8.2. Analytical signal and complex envelope of a modulated signal
7.9. Relationship between band-pass filter H and equivalent low-pass filter He
7.9.1. Probability of errors
7.10. M-ary Phase Shift Keying Modulation (M-PSK)
7.10.1. Binary Phase Shift Keying (BPSK) modulation and demodulation. 7.10.1.1. BPSK modulator
7.10.1.2. BPSK demodulator
7.10.1.3. Spectral occupancy of the BPSK signal
7.10.2. Quaternary Phase Shift Keying (QPSK) modulation and demodulation
7.10.2.1. Spectral occupancy of the QPSK signal
7.10.3. Differential M-PSK receiver
7.10.3.1. Differential BPSK coding and decoding
7.10.3.2. Differential QPSK coding and decoding
7.10.3.3. Differential QPSK decoder
7.10.4. Offset Quaternary Phase Shift Keying (OQPSK)
7.10.4.1. OQPSK modulator and demodulator
7.10.4.2. Spectral occupancy of the OQPSK signal
7.11. M-ary Quadrature Amplitude Modulation (M-QAM)
7.12. Detailed presentation of 16-QAM modulation and demodulation
7.12.1. Spectral occupancy of the 16-QAM modulated signal
7.13. Amplitude and Phase Shift Keying Modulation (APSK)
7.13.1. CIR (4, 4, 4, 4) modulation: 4 amplitudes, 4 phases
7.14. Detailed presentation of the 8-PSK modulation and demodulation
7.14.1. Differential coding and decoding of the 8-PSK modulation
7.14.2. Realization of the differential encoder and decoder: by Simulink simulation (MATLAB) and hardware implementation based on a ROM or EPROM memory. 7.14.2.1. Realization under Simulink
7.14.2.2. Implementation of the differential encoder and decoder by a ROM or EPROM memory
7.14.2.3. 8-PSK transmitter and receiver including differential encoder and decoder
7.14.2.4. Spectral occupancy of the 8-PSK signal
7.15. Performance of modulations in terms of spectral occupancy and efficiency
References
Index
A
B
C
D
E
F, G
H
I
K, L
M
N
O
P
Q, R
S
T
U, W
WILEY END USER LICENSE AGREEMENT
Excerpt from the book
To all our families, far and wide