Graph Spectral Image Processing - Gene Cheung

List of Illustrations


1 Introduction to Graph Spectral Image Processing
Figure I.1. Visual comparison of JPEG dequantization methods for a butterfly at QF = 5. The corresponding PSNR values are also given. For a color version of this figure, see www.iste.co.uk/cheung/graph.zip

2 Chapter 1
Figure 1.1. Left: Star graph with N = 7. Right: Textured region of Barbara.
Figure 1.2. Comparison of approximation errors in . For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
Figure 1.3. Image denoising example using bilateral filters. From left to right: original, noisy (PSNR: 20.02 dB), bilateral filter in the pixel domain (PSNR: 26.23 dB) and bilateral filter in the graph frequency domain (PSNR: 27.14 dB). Both bilateral filters use the same parameters.
Figure 1.4. Framework of a graph filter bank.

3 Chapter 2
Figure 2.1. Different choices of graph lead to different representations of the same signal. The signal forms a smooth representation on the graph (a), as its values vary slowly along the edges of the graph, and it mainly consists of low-frequency components in the graph spectral domain (b). The representation of the same signal is less smooth on the graph (c), and consists of both low- and high-frequency components in the graph spectral domain (d). Figure from Dong et al. (2019) with permission. For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
Figure 2.2. Diffusion processes on the graph defined by a heat diffusion kernel (top right) and a graph shift operator (bottom right). Figure from Dong et al. (2019) with permission. For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
Figure 2.3. (a) A graph signal. (b)-(e) Its decomposition into four localized simple components. Each component is a heat diffusion process (e^(−τL)) at time τ that started from a different network node. The size and color of each ball indicate the value of the signal at each vertex of the graph. Figure from Thanou et al. (2017) with permission. For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
Figure 2.4. A graph signal x at time step t is modeled as a linear combination of its observations in the past T time steps and a random noise process n[t]. Figure from Dong et al. (2019) with permission. For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
Figure 2.5. Inferring a graph for image coding. Figure from Dong et al. (2019) with permission. For a color version of this figure, see www.iste.co.uk/cheung/graph.zip

4 Chapter 3
Figure 3.1. Spatial graph-convolutional layer: an edge function weighs the node feature vectors, and an aggregation function merges them. For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
Figure 3.2. Edge-conditioned convolution. For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
Figure 3.3. Circulant approximation of a fully connected layer. On the left: a traditional parameter matrix without any structure. On the right: a stack of partial circulant matrices, where the weights in a row are shifted for a number of following rows before being replaced. For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
Figure 3.4. Low-rank approximation of the last layer of the network. For a color version of this figure, see www.iste.co.uk/cheung/graph.zip

5 Chapter 4
Figure 4.1. Separable and non-separable transforms. Left: for an N × N block, separable transforms are composed of two (possibly distinct) N × N transforms, Urow and Ucol, applied to the rows and columns of the block. Right: non-separable transforms apply an N² × N² linear transformation using U.
Figure 4.2. Building blocks of a typical image/video encoder and decoder, consisting of three main steps: (i) prediction, (ii) transformation and (iii) quantization and entropy coding. This figure is adapted from Figure 1 in Egilmez et al. (2020a).
Figure 4.3. Illustration of the spectral transform design methodology. Traditional methods use a sample covariance to derive an approximation of the KLT (sample KLT). This chapter focuses on approaches that find a graph Laplacian from data for designing GFTs (whose steps are represented by red arrows). For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
Figure 4.4. 2D GMRF models for intra- and inter-predicted signals. Filled vertices correspond to reference pixels obtained (a) from neighboring blocks and (b) from other frames via motion compensation. Unfilled vertices denote the pixels to be predicted and then transform coded. This figure is adapted from Figure 3 in Egilmez et al. (2020a).
Figure 4.5. 1D GMRF models for (a) intra- and (b) inter-predicted signals. Black-filled vertices represent the reference pixels and unfilled vertices denote pixels to be predicted and then transform coded. This figure is adapted from Figure 2 in Egilmez et al. (2020a).
Figure 4.6. An illustration of graph construction for a given 8 × 8 residual block signal, where wc = 1 and we = wc/sedge = 0.1, with sedge = 10. The resulting basis patches are also depicted. For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
Figure 4.7. A 1D graph-based model with an image edge at location l = j. All black-colored edges have weights equal to wc, and the gray edge between vertices vj and vj+1 is weighted as we = wc/sedge. This figure is adapted from Figure 5 in Egilmez et al. (2020a).
Figure 4.8. Coding gain (cg) versus sedge for block sizes N = 4, 8, 16, 32, 64. EA-GFT provides better coding gain (i.e. cg is negative) when sedge is larger than 10, across different block sizes. This figure is adapted from Figure 6 in Egilmez et al. (2020a). For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
Figure 4.9. Coding gain (cg) versus bits per pixel (R/N) for different edge sharpness parameters sedge = 10, 20, 40, 100, 200. EA-GFT provides better coding gain (i.e. cg is negative) if sedge is larger than 10, for different block sizes. This figure is adapted from Figure 7 in Egilmez et al. (2020a). For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
Figure 4.10. BD-rates achieved for coding intra-predicted blocks with the RDOT scheme based on the KLT, S-GFT and NS-GFT, trained on datasets with fewer samples. This figure is adapted from Figure 14 in Egilmez et al. (2020a). For a color version of this figure, see www.iste.co.uk/cheung/graph.zip

6 Chapter 5
Figure 5.1. Example of a 3D point cloud: Stanford Bunny. For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
Figure 5.2. Example of an omnidirectional image: Sopha (Hawary et al. 2020). For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
Figure 5.3. Example of a light field image: Succulents (Jiang et al. 2017). For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
Figure 5.4. General scheme of graph spectral 3D image compression.
Figure 5.5. Graph-based encoder principles, where x is the signal to encode, ŷ is the previously decoded data, z is the residue, and α denotes the transformed coefficients and their quantized versions.
Figure 5.6. Illustration of the difference between a compact and a non-compact representation. For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
Figure 5.7. Graph topology for light fields and multi-view images. For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
Figure 5.8. Graph topology for point clouds. For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
Figure 5.9. Graph topology for omnidirectional images based on two different sampling approaches. For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
Figure 5.10. Example of a rate-distortion optimized graph partition in an omnidirectional image. Left: in the equirectangular domain; right: in the sphere domain. For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
Figure 5.11. Illustration of the output of the optimization process for a super-ray in four views of a light field. The first row corresponds to a super-ray across four views of the light field. The signal on the vertices corresponds to the color values lying on super-pixels belonging to the same super-ray, and the blue lines denote the correspondences based on the geometrical information at hand. The second to fourth rows are illustrations of basis functions before and after optimization. The signals on the vertices are the eigenvector values. For a color version of this figure, see www.iste.co.uk/cheung/graph.zip

7 Chapter 6
Figure 6.1. Several representative image restoration problems. The bottom-middle image is the original image. For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
Figure 6.2. A simple grid graph . Only part of the graph is shown. For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
Figure 6.3. Denoising results of different approaches, with the depth map Teddy corrupted by AWGN (σ = 10). © 2013 IEEE. Reprinted, with permission, from Hu et al. (2013).
Figure 6.4. Illustrations of different kinds of images. (a) A true natural image, (b) a blurry image, (c) a skeleton image, and (d), (e) and (f) are patches in the green squares of (a), (b) and (c), respectively. © 2018 IEEE. Reprinted, with permission, from Bai et al. (2019). For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
Figure 6.5. Edge weight distribution around image edges. (a) A true natural patch, (b) a blurry patch, and (c) a skeleton patch. © 2018 IEEE. Reprinted, with permission, from Bai et al. (2019). For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
Figure 6.6. Deblurring results comparison. (a) Blurry image. (b) Sun et al. (2013). (c) Michaeli and Irani (2014). (d) Lai et al. (2015). (e) Pan et al. (2016). (f) RGTV. The blur kernel is shown at the lower left corner. © 2018 IEEE. Reprinted, with permission, from Bai et al. (2019). For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
Figure 6.7. (a) A patch being optimized encloses a smaller code block. Boundary discontinuity is removed by averaging overlapping patches. (b) The relationship between the dictionary size and the restoration performance. © 2016 IEEE. Reprinted, with permission, from Liu et al. (2017). For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
Figure 6.8. Comparison with competing schemes on Butterfly at QF = 5. The corresponding PSNR and SSIM values are shown. © 2016 IEEE. Reprinted, with permission, from Liu et al. (2017).
Figure 6.9. Sampling the exemplar function fn at pixel locations in domain Ω. © 2017 IEEE. Reprinted, with permission, from Pang and Cheung (2017). For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
Figure 6.10. (a)-(c) Different scenarios of using the metric norm as a "pointwise" regularizer. (d) The ideal metric space. The red dots mark the ground-truth gradient. © 2017 IEEE. Reprinted, with permission, from Pang and Cheung (2017). For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
Figure 6.11. Denoising of the image Lena, where the original image is corrupted by AWGN with σI = 40. Two cropped fragments of each image are presented for comparison. © 2017 IEEE. Reprinted, with permission, from Pang and Cheung (2017).
Figure 6.12. Inpainting of the image Peppers with the low-dimensional manifold model (LDMM), where the corrupted image keeps only 10% of the pixels from the original image.
Figure 6.13. Results of real image denoising. (a) Noise clinic (model-based) (Lebrun et al. 2015); (b) CDnCNN (data-driven) (Zhang et al. 2017); (c) DeepGLR. CDnCNN and DeepGLR are trained for Gaussian denoising. © 2019 IEEE. Reprinted, with permission, from Zeng et al. (2019). For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
Figure 6.14. Block diagram of the proposed GLRNet, which employs a graph Laplacian regularization layer for image denoising. © 2019 IEEE. Reprinted, with permission, from Zeng et al. (2019).
Figure 6.15. Block diagram of the overall DeepGLR framework. © 2019 IEEE. Reprinted, with permission, from Zeng et al. (2019).
Figure 6.16. DeepAGF framework. Top: block diagram of the AGFNet, using an analytical graph filter for image denoising. Bottom: block diagram of the N-stack DeepAGF framework. © 2020 IEEE. Reprinted, with permission, from Su et al. (2020). For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
Figure 6.17. Denoising result comparison on the Starfish image with input noise level σ = 70. (a) Original, (b) DnCNN (Zhang et al. 2017) and (c) DeepAGF. DnCNN and DeepAGF are trained with noise level σ = 50. © 2020 IEEE. Reprinted, with permission, from Su et al. (2020). For a color version of this figure, see www.iste.co.uk/cheung/graph.zip

8 Chapter 7
Figure 7.1. Examples of 3D point clouds: (a) a 3D Bunny model obtained by range scanners with some postprocessing and (b) one LiDAR sweep directly collected by a Velodyne HDL-64E for autonomous driving.
Figure 7.2. Graph and graph signals for 3D point clouds. A K-nearest-neighbor graph is constructed to capture the pairwise spatial relationships among 3D points. The values of graph signals are reflected via color. For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
Figure 7.3. Low-pass approximation of the point cloud Bunny. Plot (a) is the original sampled point cloud with 1,797 points. Plots (b)-(d) show the low-pass approximations with 10, 100 and 1,000 graph frequencies.
Figure 7.4. Graph filtering for 3D point clouds. Low-pass graph filtering smooths the sharp transition in a graph signal, while high-pass graph filtering highlights the sharp transition. For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
Figure 7.5. A synthetic noisy point cloud with Gaussian noise σ = 0.04 for Quasimoto and one denoised result: (a) the ground truth; (b) the noisy point cloud; (c) the denoised result by Hu et al. (2020a).
Figure 7.6. Dual problems of 3D point cloud downsampling and upsampling. For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
Figure 7.7. Local-variation-based downsampling enhances the contour information (Chen et al. 2018). For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
Figure 7.8. Two auto-encoder frameworks for 3D point clouds. For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
Figure 7.9. Graph topology learning and filtering improves the reconstruction of a 3D point cloud (Chen et al. 2019).
Figure 7.10. An illustration of the GraphTER model for unsupervised feature learning (Gao et al. 2020). For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
Figure 7.11. The architecture of the unsupervised feature learning in GraphTER. The representation encoder and transformation decoder are jointly trained by minimizing equation [7.33]. For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
Figure 7.12. Visual comparison (Gao et al. 2020) of point cloud segmentation between GraphTER and MAP-VAE. For a color version of this figure, see www.iste.co.uk/cheung/graph.zip

9 Chapter 8
Figure 8.1. An example of the matrix W. For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
Figure 8.2. (a) The original 480 × 640 image with initial scribbles for three regions (blue, red and green). (b)-(d) The regions viewed against a uniform blue background, respectively. For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
Figure 8.3. An example of multiple images: two images of cells of benign and malignant types. For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
Figure 8.4. A one-dimensional example of five grid points for three-region segmentation (blue circle: image pixel; blue arrow: an edge between two grid point vertices; brown arrow: an edge from the source vertex to a grid point vertex; green arrow: an edge from a grid point vertex to the sink vertex; red arrow: an edge from one region to another region). For a color version of this figure, see www.iste.co.uk/cheung/graph.zip

10 Chapter 9
Figure 9.1. Classification error rate (%) as a function of node degree k for the two datasets. For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
Figure 9.2. Classification error rate (%) as a function of labeling ratio for the two datasets. For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
Figure 9.3. Classification error rate (%) as a function of γ for the two datasets.
Figure 9.4. Classification error rate (%) as a function of μ for the two datasets. For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
Figure 9.5. Classification error rate (%) as a function of label noise and the threshold τ for (a) the MNIST dataset and (b) the CIFAR10 dataset. We show the classification error rate in color space as the improvement when using two iterations, relative to the case with only one iteration; thus, a negative classification error rate means there is improvement due to the second iteration. For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
Figure 9.6. Classification error rate (%) as a function of label noise for the two datasets. For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
Figure 9.7. The block diagram of the unweighted graph generation scheme. V0(·) is a feature map function that reflects the node-to-node correlation. For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
Figure 9.8. The graph-based classifier and graph update scheme. The green and blue colors denote input and output, respectively. For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
Figure 9.9. The overall block diagram of the DynGLR-Net for r = 2. Given observations X, G-Net first learns an initial undirected and unweighted k-NN graph by minimizing LossE. The resulting edge matrix E0 is used in the GLR iteration. The learnt shallow feature map f1(X) = {X, ZD(X)} is then used as an input to learn a CNNC1 network for assigning weights to the initial graph edges. Given a subset of potentially noisy labels, Ẏ, GLR is performed on the constructed undirected and weighted graph to restore the labels. The resulting restored labels are used in the following GLR iterations.
Figure 9.10. CNNCr neural nets for the CIFAR10 dataset: "pool/q/w" refers to a max-pooling layer with q = pool size and w = stride size. "x conv y/z" refers to a 1D convolutional layer with y filters, each with kernel size x and stride size z. "fc x" refers to a fully connected layer with x = number of neurons. "reshape x" refers to a reshape layer that transforms the size of the input to x. "avg" refers to the global average-pooling layer (Lin et al. 2013). For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
Figure 9.11. CNNHU neural nets for the CIFAR10 dataset. For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
Figure 9.12. The magnitude of the GFT coefficients for the CIFAR10 dataset using sufficient data under a 30% label noise level. The density of each eigenvalue λ across all experiments on the testing sets is represented through color maps. The top row shows the result after initialization and before GLR (G-Net output), and the second and third rows show the results after the first (r = 1) and second (r = 2) iterations, respectively. For a color version of this figure, see www.iste.co.uk/cheung/graph.zip

11 Chapter 10
Figure 10.1. Point cloud classification network (Simonovsky and Komodakis 2017). The network outputs the classification score vector y ∈ Rc, where c is the number of classes. GCONV: graph convolution operation; BNORM: batch normalization; FC: fully connected layer. For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
Figure 10.2. Point cloud segmentation network (Wang et al. 2019). The network outputs the per-point classification scores Y ∈ RN×p for p semantic labels. ⊕: concatenation. For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
Figure 10.3. Part segmentation results for chairs, lamps and tables. Figure from Wang et al. (2019). For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
Figure 10.4. Image denoising network. Figure from Valsesia et al. (2019a). For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
Figure 10.5. Graph-convolutional layer. Figure from Valsesia et al. (2019a). For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
Figure 10.6. Extract from Urban100 scene 13, σ = 25. Left to right: ground truth, noisy (20.16 dB), BM3D (30.40 dB), DnCNN (30.71 dB), NLRN (31.41 dB), GCNN (31.53 dB). Figure from Valsesia et al. (2019a).
Figure 10.7. Receptive field (green) of a single pixel (red, circled in purple) for the three graph-convolutional layers in the LPF1 block, with respect to the input of the first graph-convolutional layer in the block. Top row: gray pixel on an edge. Bottom row: white pixel in a uniform area. Figure from Valsesia et al. (2019a). For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
Figure 10.8. Generative adversarial network. A generator G maps a latent vector z into a sample ĵ of the data distribution. The discriminator D can be interpreted as measuring the optimal transport cost between the true data and the generated data distributions. For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
Figure 10.9. Graph-convolutional GAN generator. The graph-convolutional and upsampling layers use a k-nearest-neighbor graph computed from the feature vectors at the input of the layer. For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
Figure 10.10. Generated point clouds. For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
Figure 10.11. Graph-convolutional variational autoencoder for shape completion. Figure from Litany et al. (2018). For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
Figure 10.12. Examples of completed shapes. Figure from Litany et al. (2018). For a color version of this figure, see www.iste.co.uk/cheung/graph.zip

