Multimedia Security, Volume 1
Contents
William Puech. Multimedia Security, Volume 1
Table of Contents
List of Tables
List of Illustrations
Multimedia Security 1. Authentication and Data Hiding
Foreword by Gildas Avoine
Foreword by Cédric Richard
Preface
1. How to Reconstruct the History of a Digital Image, and of Its Alterations
1.1. Introduction
1.1.1. General context
1.1.2. Criminal background
1.1.3. Issues for law enforcement
1.1.4. Current methods and tools of law enforcement
1.1.5. Outline of this chapter
1.2. Describing the image processing chain
1.2.1. Raw image acquisition
1.2.2. Demosaicing
1.2.3. Color correction
1.2.4. JPEG compression
1.3. Traces left on noise by image manipulation
1.3.1. Non-parametric estimation of noise in images
1.3.2. Transformation of noise in the processing chain
1.3.2.1. Raw image acquisition
1.3.2.2. Demosaicing
1.3.2.3. Color correction
1.3.2.4. JPEG compression
1.3.3. Forgery detection through noise analysis
1.4. Demosaicing and its traces
1.4.1. Forgery detection through demosaicing analysis
1.4.2. Detecting the position of the Bayer matrix
1.4.2.1. Joint estimation of the sampled pixels and the demosaicing algorithm
1.4.2.2. Double demosaicing detection
1.4.2.3. Direct detection of the grid by intermediate values
1.4.2.4. Detecting the variance of the color difference
1.4.2.5. Detection by neural networks of the relative position of blocks
1.4.3. Limits of demosaicing detection
1.5. JPEG compression, its traces and the detection of its alterations
1.5.1. The JPEG compression algorithm
1.5.2. Grid detection
1.5.2.1. Compression artifacts
1.5.2.2. DCT coefficients
1.5.3. Detecting the quantization matrix
1.5.4. Beyond indicators, making decisions with a statistical model
1.6. Internal similarities and manipulations
1.7. Direct detection of image manipulation
1.8. Conclusion
1.9. References
2. Deep Neural Network Attacks and Defense: The Case of Image Classification
2.1. Introduction
2.1.1. A bit of history and vocabulary
2.1.2. Machine learning
2.1.3. The classification of images by deep neural networks
2.1.4. Deep Dreams
2.2. Adversarial images: definition
2.3. Attacks: making adversarial images
2.3.1. About white box
2.3.1.1. The attacker’s objective function
2.3.1.2. Two big families
2.3.1.2.1. Distortion objective
2.3.1.2.2. Success target
2.3.1.3. Distortion objective: main attacks
2.3.1.3.1. FGSM
2.3.1.3.2. I-FGSM
2.3.1.3.3. PGD2
2.3.1.3.4. M-IFGSM
2.3.1.4. Success goal: main attacks
2.3.1.4.1. L-BFGS
2.3.1.4.2. C&W
2.3.1.4.3. DDN
2.3.1.5. Other attacks
2.3.1.5.1. DeepFool
2.3.1.5.2. ILC
2.3.1.5.3. JSMA
2.3.1.5.4. Universal attacks
2.3.1.5.5. Geometric attacks
2.3.1.5.6. Generative Adversarial Network attacks
2.3.2. Black or gray box
2.3.2.1. Two concepts about the black box
2.3.2.2. Output = probability vector
2.3.2.3. Output = predicted class
2.4. Defenses
2.4.1. Reactive defenses
2.4.1.1. Learn about the manifold of natural images
2.4.1.2. Interaction with the classifier
2.4.2. Proactive defenses
2.4.2.1. Reducing the amplitude of gradients
2.4.2.2. Adversarial training
2.4.3. Obfuscation technique
2.4.4. Defenses: conclusion
2.5. Conclusion
2.6. References
3. Codes and Watermarks
3.1. Introduction
3.2. Study framework: robust watermarking
3.3. Index modulation
3.3.1. LQIM: insertion
3.3.2. LQIM: detection
3.4. Error-correcting codes approach
3.4.1. Generalities
3.4.2. Codes by concatenation
3.4.3. Hamming codes
3.4.4. BCH codes
3.4.4.1. Cyclic block codes
3.4.4.2. An example of construction: BCH (15,7,5)
3.4.5. RS codes
3.4.5.1. Principle of RS codes
3.4.5.2. Bounded distance decoding algorithms
3.4.5.3. Performance against packet errors of RS codes
3.5. Contradictory objectives of watermarking: the impact of codes
3.6. Latest developments in the use of correction codes for watermarking
3.7. Illustration of the influence of the type of code, according to the attacks
3.7.1. JPEG compression
3.7.2. Additive Gaussian noise
3.7.3. Saturation
3.8. Using the rank metric
3.8.1. Rank metric correcting codes
3.8.1.1. Definitions and properties
3.8.1.2. Rank distance
3.8.1.3. Principle of rank metric codes
3.8.1.4. Decoding Gabidulin codes
3.8.1.5. Introduction to rank metric in a watermarking strategy
3.8.2. Code by rank metric: a watermarking method robust against image cropping
3.8.2.1. Description of the “rank metric method against cropping”
3.8.2.2. Robustness of the rank metric against cropping
3.9. Conclusion
3.10. References
4. Invisibility
4.1. Introduction
4.2. Color watermarking: a history of approaches?
4.2.1. Vector quantization in the RGB space
4.2.2. Choosing a color direction
4.3. Quaternionic context for watermarking color images
4.3.1. Quaternions and color images
4.3.2. Quaternionic Fourier transforms
4.4. Psychovisual approach to color watermarking
4.4.1. Neurogeometry and perception
4.4.2. Photoreceptor model and trichromatic vision
4.4.3. Model approximation
4.4.4. Parameters of the model
4.4.5. Application to watermarking color images
4.4.6. Conversions
4.4.7. Psychovisual algorithm for color images
4.4.8. Experimental validation of the psychovisual approach for color watermarking
4.4.8.1. Validation of invisibility
4.4.8.2. Impact on robustness
4.4.8.2.1. Modification of luminance
4.4.8.2.2. Modification in the HSV space
4.5. Conclusion
4.6. References
5. Steganography: Embedding Data Into Multimedia Content
5.1. Introduction and theoretical foundations
5.2. Fundamental principles
5.2.1. Maximization of the size of the embedded message
5.2.2. Message encoding
5.2.3. Detectability minimization
5.3. Digital image steganography: basic methods
5.3.1. LSB substitution and matching
5.3.2. Adaptive embedding methods
5.3.2.1. Example of costs in the pixel domain (HILL scheme)
5.3.2.2. Costs taking detectability into account (MiPOD scheme)
5.3.2.3. Example of costs in the JPEG domain (UERD scheme)
5.4. Advanced principles in steganography
5.4.1. Synchronization of modifications
5.4.2. Batch steganography
5.4.3. Steganography of color images
5.4.4. Use of side information
5.4.5. Steganography mimicking a statistical model
5.4.5.1. Distribution of DCT coefficients
5.4.5.2. Photonic noise
5.4.6. Adversarial steganography
5.4.6.1. Attacking steganography
5.4.6.2. Steganography through adversarial generator
5.5. Conclusion
5.6. References
6. Traitor Tracing
6.1. Introduction
6.1.1. The contribution of the cryptography community
6.1.2. Multimedia content
6.1.3. Error probabilities
6.1.4. Collusion strategy
6.2. The original Tardos code
6.2.1. Constructing the code
6.2.2. The collusion strategy and its impact on the pirated series
6.2.3. Accusation with a simple decoder
6.2.3.1. Robustness
6.2.3.2. Completeness
6.2.4. Study of the original Tardos–Škorić code
6.2.5. Advantages
6.2.5.1. Pedagogy
6.2.5.2. Latest developments
6.2.5.3. Worst case attack
6.2.6. The problems
6.2.6.1. Too restrictive
6.2.6.2. Ill-posed problem
6.2.6.3. Score function that is too constrained
6.3. Tardos and his successors
6.3.1. Length of the code
6.3.2. Other criteria
6.3.2.1. Code length
6.3.2.2. Signal-to-noise ratio
6.3.2.3. Theoretical rates
6.3.3. Extensions
6.3.3.1. Binary codes
6.3.3.2. q-ary codes
6.3.3.3. Joint decoders
6.4. The search for better score functions
6.4.1. The optimal score function
6.4.2. The theory of the compound communication channel
6.4.3. Adaptive score functions
6.4.3.1. Estimation of the collusion strategy knowing c
6.4.3.1.1. Step E
6.4.3.1.2. Step M
6.4.3.2. Size of the unknown collusion
6.4.4. Comparison
6.5. How to find a better threshold
6.6. Conclusion
6.7. References
7. 3D Watermarking
7.1. Introduction
7.2. Preliminaries
7.2.1. Digital watermarking
7.2.2. 3D objects
7.3. Synchronization
7.3.1. Traversal scheduling
7.3.2. Patch scheduling
7.3.3. Scheduling based on graphs
7.3.3.1. Spanning trees
7.3.3.2. Hamiltonian path
7.4. 3D data hiding
7.4.1. Transformed domains
7.4.2. Spatial domain
7.4.3. Other domains
7.5. Presentation of a high-capacity data hiding method
7.5.1. Embedding of the message
7.5.2. Causality issue
7.6. Improvements
7.6.1. Error-correcting codes
7.6.2. Statistical arithmetic coding
7.6.3. Partitioning and acceleration structures
7.7. Experimental results
7.8. Trends in high-capacity 3D data hiding
7.8.1. Steganalysis
7.8.2. Security analysis
7.8.3. 3D printing
7.9. Conclusion
7.10. References
8. Steganalysis: Detection of Hidden Data in Multimedia Content
8.1. Introduction, challenges and constraints
8.1.1. The different aims of steganalysis
8.1.2. Different methods to carry out steganalysis
8.2. Incompatible signature detection
8.3. Detection using statistical methods
8.3.1. Statistical test of χ2
8.3.2. Likelihood-ratio test
8.3.3. LSB match detection
8.4. Supervised learning detection
8.4.1. Extraction of characteristics in the spatial domain
8.4.1.1. SPAM characteristics
8.4.1.2. RM characteristics
8.4.1.3. Extraction of characteristics in the JPEG domain
8.4.2. Learning how to detect with features
8.5. Detection by deep neural networks
8.5.1. Foundation of a deep neural network
8.5.1.1. Global view of a CNN
8.5.2. The preprocessing module
8.5.2.1. The convolution module
8.5.2.2. The classification module
8.5.2.3. Using the modification probability map (Selection-Channel-Aware, SCA)
8.5.2.4. JPEG steganalysis
8.6. Current avenues of research
8.6.1. The problem of Cover-Source mismatch
8.6.2. The problem with steganalysis in real life
8.6.3. Reliable steganalysis
8.6.4. Steganalysis of color images
8.6.5. Taking into account the adaptivity of steganography
8.6.6. Grouped steganalysis (batch steganalysis)
8.6.7. Universal steganalysis
8.7. Conclusion
8.8. References
List of Authors
Index
A, B
C
D
E
F, G
H, I
J, L
M, N
Q, S
T, V, W
WILEY END USER LICENSE AGREEMENT
Excerpt from the book
Image, Field Director – Laure Blanc-Féraud
.....
These are methods based on the detectable traces left by compression. In their article, Minami and Zakhor (1995) propose a way of detecting JPEG grids with the aim of removing blocking artifacts. Later on, in Fan and de Queiroz (2003), the same ideas are used to decide whether an image has undergone JPEG compression, depending on whether such traces are present or not. These methods use filters to bring out the traces of compression (Chen and Hsu 2008; Li et al. 2009). The simplest method computes the absolute value of the image gradient (Lin et al. 2009), while others use the absolute value of second-order derivatives (Li et al. 2009). However, these two filters can respond strongly to edges and textures present in the image and can therefore sometimes lead to faulty grid detections. To reduce interference from details in the scene, the cross-difference filter proposed by Chen and Hsu (2008) is more suitable. This filter, represented in Figure 1.10, amounts to computing the absolute value of the convolution of the image with a 2 × 2 kernel. The grid becomes visible because the differentiating filter responds along the block boundaries of the compressed image. The stronger the compression, the more pronounced this feature is.
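To make the idea concrete, here is a minimal NumPy sketch of such a cross-difference map, together with a simple accumulation of its response over the 64 possible grid offsets. The function names and the offset accumulation are illustrative assumptions rather than code from the chapter; the kernel [[1, -1], [-1, 1]] follows the description above.

import numpy as np

def cross_difference_map(image):
    # Absolute value of the convolution of the image with the 2 x 2 kernel
    # [[1, -1], [-1, 1]]; on a JPEG-compressed image the response accumulates
    # along the 8 x 8 block boundaries.
    img = np.asarray(image, dtype=np.float64)
    d = img[:-1, :-1] - img[:-1, 1:] - img[1:, :-1] + img[1:, 1:]
    return np.abs(d)

def grid_energy(cross_map):
    # Illustrative accumulation of the filter response for each of the 8 x 8
    # candidate grid offsets; the offset with the largest energy suggests the
    # most likely position of the JPEG grid.
    energy = np.zeros((8, 8))
    for dy in range(8):
        for dx in range(8):
            energy[dy, dx] = cross_map[dy::8, :].sum() + cross_map[:, dx::8].sum()
    return energy

# Example use on a grayscale image stored as a 2D array `img`:
# offset = np.unravel_index(np.argmax(grid_energy(cross_difference_map(img))), (8, 8))

On a strongly compressed image, the energy of the offset aligned with the JPEG grid dominates the other candidates, which is precisely the feature exploited by the grid detection methods cited above.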
Recently, methods like the one proposed in Nikoukhah et al. (2020) have made these methods automatic and unsupervised thanks to statistical validation.
.....