Graph Spectral Image Processing
Table of Contents
Gene Cheung. Graph Spectral Image Processing
Table of Contents
List of Illustrations
List of Tables
Graph Spectral Image Processing
Introduction to Graph Spectral Image Processing
I.1. Introduction
I.2. Graph definition
I.3. Graph spectrum
I.4. Graph variation operators
I.5. Graph signal smoothness priors
I.6. References
1. Graph Spectral Filtering
1.1. Introduction
1.2. Review: filtering of time-domain signals
1.3. Filtering of graph signals
1.3.1. Vertex domain filtering
1.3.2. Spectral domain filtering
1.3.3. Relationship between graph spectral filtering and classical filtering
1.4. Edge-preserving smoothing of images as graph spectral filters
1.4.1. Early works
1.4.2. Edge-preserving smoothing
1.5. Multiple graph filters: graph filter banks
1.5.1. Framework
1.5.2. Perfect reconstruction condition
1.5.2.1. Design of perfect reconstruction transforms: undecimated case
1.5.2.2. Design of perfect reconstruction transforms: decimated case
1.6. Fast computation
1.6.1. Subdivision
1.6.2. Downsampling
1.6.3. Precomputing GFT
1.6.4. Partial eigendecomposition
1.6.5. Polynomial approximation
1.6.6. Krylov subspace method
1.7. Conclusion
1.8. References
2. Graph Learning
2.1. Introduction
2.2. Literature review
2.2.1. Statistical models
2.2.2. Physically motivated models
2.3. Graph learning: a signal representation perspective
2.3.1. Models based on signal smoothness
2.3.2. Models based on spectral filtering of graph signals
2.3.2.1. Stationarity-based learning frameworks
2.3.2.2. Graph dictionary-based learning frameworks
2.3.3. Models based on causal dependencies on graphs
2.3.4. Connections with the broader literature
2.4. Applications of graph learning in image processing
2.5. Concluding remarks and future directions
2.6. References
3. Graph Neural Networks
3.1. Introduction
3.2. Spectral graph-convolutional layers
3.3. Spatial graph-convolutional layers
3.4. Concluding remarks
3.5. References
4. Graph Spectral Image and Video Compression
4.1. Introduction
4.1.2. Literature review
4.1.3. Outline of the chapter
4.2. Graph-based models for image and video signals
4.2.1. Graph-based models for residuals of predicted signals
4.2.1.1. A general model for residual signals
4.2.1.2. 1D line models for residual signals
4.2.2. DCT/DSTs as GFTs and their relation to 1D models
4.2.3. Interpretation of graph weights for predictive transform coding
4.3. Graph spectral methods for compression
4.3.1. GL-GFT design
4.3.1.1. Generalized graph Laplacian estimation
4.3.1.2. GL-GFT construction
4.3.1.3. Theoretical justifications for graph learning from data
4.3.2. EA-GFT design
4.3.2.1. EA-GFT construction
4.3.2.2. Theoretical justifications for EA-GFT
4.3.3. Empirical evaluation of GL-GFT and EA-GFT
4.3.3.1. Experimental setup
4.3.3.2. Compression results
4.4. Conclusion and potential future work
4.5. References
5. Graph Spectral 3D Image Compression
5.1. Introduction to 3D images
5.1.1. 3D image definition
5.1.2. Point clouds and meshes
5.1.3. Omnidirectional images
5.1.4. Light field images
5.1.5. Stereo/multi-view images
5.2. Graph-based 3D image coding: overview
5.3. Graph construction
5.3.1. Geometry-based approaches
5.3.2. Joint geometry and color-based approaches
5.3.2.1. Segmenting the graphs
5.3.2.2. Learning the graph
5.3.3. Separable transforms
5.4. Concluding remarks
5.5. References
6. Graph Spectral Image Restoration
6.1. Introduction
6.1.1. A simple image degradation model
6.1.2. Restoration with signal priors
6.1.3. Restoration via filtering
6.1.4. GSP for image restoration
6.2. Discrete-domain methods
6.2.1. Non-local graph-based transform for depth image denoising
6.2.1.1. Non-local graph-based transform
6.2.1.2. Algorithm implementation and performance demonstration
6.2.2. Doubly stochastic graph Laplacian
6.2.2.1. Doubly stochastic Laplacian and spectral interpretation
6.2.2.2. Algorithm implementation and performance comparisons
6.2.3. Reweighted graph total variation prior
6.2.3.1. Kernel estimation from skeleton image
6.2.3.2. Reweighted graph total variation and spectral analysis
6.2.3.3. Algorithm implementation
6.2.3.4. Performance demonstration
6.2.4. Left eigenvectors of random walk graph Laplacian
6.2.4.1. JPEG soft decoding
6.2.4.2. LERaG prior
6.2.4.3. Property of LERaG
6.2.4.4. Algorithm implementation
6.2.4.5. Performance demonstration
6.2.5. Graph-based image filtering
6.3. Continuous-domain methods
6.3.1. Continuous-domain analysis of graph Laplacian regularization
6.3.1.1. Interpreting the graph Laplacian regularizer
6.3.1.2. Optimal graph for image denoising
6.3.1.3. Experimentation
6.3.2. Low-dimensional manifold model for image restoration
6.3.2.1. The low-dimensional manifold model
6.3.3. LDMM as graph Laplacian regularization
6.4. Learning-based methods
6.4.1. CNN with GLR
6.4.1.1. Combining advantages of GLR and CNN
6.4.1.2. DeepGLR framework
6.4.1.3. Result demonstration
6.4.2. CNN with graph wavelet filter
6.4.2.1. DeepAGF framework
6.4.2.2. Result demonstration
6.5. Concluding remarks
6.6. References
7. Graph Spectral Point Cloud Processing
7.1. Introduction
7.2. Graph and graph-signals in point cloud processing
7.3. Graph spectral methodologies for point cloud processing
7.3.1. Spectral-domain graph filtering for point clouds
7.3.2. Nodal-domain graph filtering for point clouds
7.3.3. Learning-based graph spectral methods for point clouds
7.4. Low-level point cloud processing
7.4.1. Point cloud denoising
7.4.2. Point cloud resampling
7.4.2.1. Downsampling
7.4.2.2. Upsampling
7.4.3. Datasets and evaluation metrics
7.5. High-level point cloud understanding
7.5.1. Data auto-encoding for point clouds
7.5.1.1. Encoder
7.5.1.2. Decoder
7.5.1.3. Loss function
7.5.2. Transformation auto-encoding for point clouds
7.5.2.1. Graph signal transformation
7.5.2.3. The algorithm of GraphTER
7.5.3. Applications of GraphTER in point clouds
7.5.4. Datasets and evaluation metrics
7.6. Summary and further reading
7.7. References
8. Graph Spectral Image Segmentation
8.1. Introduction
8.2. Pixel membership functions
8.2.1. Two-class problems
8.2.2. Multiple-class problems
8.2.3. Multiple images
8.3. Matrix properties
8.4. Graph cuts
8.4.1. The Mumford–Shah model
8.4.2. Graph cuts minimization
8.5. Summary
8.6. References
9. Graph Spectral Image Classification
9.1. Formulation of graph-based classification problems
9.1.1. Graph spectral classifiers with noiseless labels
9.1.2. Graph spectral classifiers with noisy labels
9.2. Toward practical graph classifier implementation
9.2.1. Graph construction
9.2.2. Experimental setup and analysis
9.2.2.1. Noiseless labels
9.2.2.2. Noisy labels
9.3. Feature learning via deep neural network
9.3.1. Deep feature learning for graph construction
9.3.2. Iterative graph construction
9.3.3. Toward practical implementation of deep feature learning
9.3.4. Analysis on iterative graph construction for robust classification
9.3.5. Graph spectrum visualization
9.3.6. Classification error rate comparison using insufficient training data
9.3.7. Classification error rate comparison using sufficient training data with label noise
9.4. Conclusion
9.5. References
10. Graph Neural Networks for Image Processing
10.1. Introduction
10.2. Supervised learning problems
10.2.1. Point cloud classification
10.2.2. Point cloud segmentation
10.2.3. Image denoising
10.3. Generative models for point clouds
10.3.1. Point cloud generation
10.3.2. Shape completion
10.4. Concluding remarks
10.5. References
List of Authors
Index
B, C
D, E
F
G
I
L, M, O
P
R, S
T, V
WILEY END USER LICENSE AGREEMENT
Excerpt from the Book
Image, Field Director – Laure Blanc-Féraud
.....
There are various methods available for designing perfect reconstruction graph transforms. First, let us consider undecimated transforms that exhibit symmetrical structure.
An undecimated transform has no sampling, i.e. S_k = I_N for all k. Therefore, the analysis and synthesis transforms, respectively, are represented in the following simple forms:
.....
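The excerpt elides the explicit forms, but the idea behind the undecimated case can be illustrated with a small numerical sketch. The Python fragment below is not taken from the book: it assumes an arbitrary 4-node path graph and an arbitrary low-pass/high-pass filter pair, and only demonstrates that, with no sampling (S_k = I_N), stacking the analysis filters and concatenating the synthesis filters yields perfect reconstruction whenever the spectral responses satisfy g0(λ)h0(λ) + g1(λ)h1(λ) = 1 at every graph frequency.

import numpy as np

# Combinatorial graph Laplacian of a 4-node path graph (toy example, not from the book).
L = np.array([[ 1., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  1.]])
evals, U = np.linalg.eigh(L)            # graph frequencies and graph Fourier basis
evals = np.maximum(evals, 0.0)          # guard against tiny negative round-off
lam_max = evals[-1]

# Assumed example analysis responses: a smooth low-pass/high-pass pair.
h0 = np.sqrt(1.0 - evals / lam_max)     # low-pass spectral response
h1 = np.sqrt(evals / lam_max)           # high-pass spectral response
g0, g1 = h0, h1                         # synthesis chosen so g0*h0 + g1*h1 = 1

def spectral_filter(response):
    # Vertex-domain filter matrix U diag(response) U^T.
    return U @ np.diag(response) @ U.T

# Undecimated case (S_k = I_N): analysis stacks the filters, synthesis concatenates them.
Ta = np.vstack([spectral_filter(h0), spectral_filter(h1)])   # shape (2N, N)
Ts = np.hstack([spectral_filter(g0), spectral_filter(g1)])   # shape (N, 2N)

x = np.random.default_rng(0).standard_normal(4)              # arbitrary graph signal
print(np.allclose(Ts @ (Ta @ x), x))                         # True: perfect reconstruction

Any other pair of responses whose products sum to one at every eigenvalue would reconstruct the signal equally well; the square-root split here is chosen only to keep the two channels symmetric.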