
1.1.5 Data decorrelation and whitening


How can one represent natural images in terms of their intrinsic properties? One of the most widely used methods in natural image statistics is principal component analysis (PCA). PCA considers the second-order statistics of natural images, i.e. the variances of and covariances among pixel values. Although PCA is not a sufficient model for the HVS, it is the foundation for other models and is usually applied as a pre-processing step for further analysis (Hyvärinen et al 2009). Through a linear transformation, PCA maps the original data into a set of representations that are linearly decorrelated along each dimension, identifying the main linear components of the data.

During the linear transformation, we would like to make the transformed vectors as dispersed as possible. Mathematically, the degree of dispersion can be expressed in terms of variance: the larger the variance of a projection, the more information it carries about the data. Therefore, the direction that maximizes the variance retains the most information, and we define it as the first principal component of the data. After obtaining the first principal component, the next linear feature must be orthogonal to the first one and, more generally, each new linear feature should be orthogonal to the existing ones. In this process, the covariance between vectors is used to represent their linear correlation; when the covariance equals zero, there is no correlation between the two vectors. The goal of PCA is thus to diagonalize the covariance matrix, i.e. to drive the off-diagonal elements to zero, while the diagonal elements remain the variances of the transformed components. Arranging the diagonal elements from top to bottom in descending order of magnitude, we achieve PCA. In the following, we briefly introduce a realization of the PCA method.

Usually, before performing PCA we remove the DC component of the images (the first-order statistical information, which often carries little structural information for natural images). Let $X \in \mathbb{R}^{n \times m}$ denote a sample matrix with the DC component removed, where n is the data dimension and m is the number of samples. Then, the covariance matrix can be computed as follows:

$\Sigma = \frac{1}{m} X X^\top$.   (1.11)

By singular value decomposition (SVD), the covariance matrix can be expressed as

$\Sigma = U S V$,   (1.12)

where $U$ is an n × n unitary matrix, $S$ is an n × n diagonal eigenvalue matrix, and $V = U^\top$ is also an n × n unitary matrix. The magnitudes of the eigenvalues reflect the importance of the corresponding principal components. With the eigenvalues arranged from top to bottom in descending order, PCA is realized with the following formula:

$X_{\mathrm{PCA}} = U^\top X$.   (1.13)
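To make this realization concrete, the following is a minimal NumPy sketch of equations (1.11)–(1.13); the function and variable names are ours for illustration, and X is assumed to be an n × m array of mean-removed samples.

import numpy as np

def pca(X):
    # X: (n, m) array holding m mean-removed samples of dimension n.
    n, m = X.shape
    # Covariance matrix, equation (1.11).
    Sigma = X @ X.T / m
    # SVD of the symmetric covariance matrix, equation (1.12);
    # NumPy returns the singular values already in descending order.
    U, S, Vt = np.linalg.svd(Sigma)
    # Projection onto the principal components, equation (1.13).
    X_pca = U.T @ X
    return X_pca, U, S

For instance, X could be built by extracting 8 × 8 patches from an image, flattening each patch into a 64-dimensional column, and subtracting the mean of each patch.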

Figure 1.8 depicts the 64 weighting matrices for Lena image patches of 8 × 8 pixels, arranged in descending order of variance from left to right along each row and from top to bottom across rows. PCA has been widely applied as a handy tool to compress data. Figure 1.9 shows a simple PCA compression experiment. It can be seen that a natural image can be represented by a small number of components relative to its original dimensionality, which means that some of the data redundancy in natural images can be removed by PCA.
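The compression experiment of figure 1.9 can be emulated along the following lines, reusing the pca sketch above; the helper name pca_compress and the choice of k are hypothetical.

import numpy as np

def pca_compress(X, U, k):
    # X is the mean-removed sample matrix and U the basis returned by pca().
    # Keep only the first k principal components and map back.
    coeffs = U[:, :k].T @ X   # k-dimensional code per sample
    return U[:, :k] @ coeffs  # reconstruction in the original space

For display, the previously removed DC component must be added back to each reconstructed patch.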


Figure 1.8. 64 weighting matrices for Lena image patches of 8 × 8 pixels.


Figure 1.9. Image compressed with PCA. Lena image (©) Playboy Enterprises, Inc.

There is an important pre-processing step related to PCA, which is called whitening. It removes the first- and second-order information which, respectively, represent the average luminance and contrast, allowing us to focus on higher-order statistical properties of the original data. Whitening is also a basic processing function of retinal and LGN cells (Atick and Redlich 1992). After the whitening operation, the data exhibit the following properties: (i) the features are uncorrelated and (ii) all features have the same (unit) variance. It is worth mentioning that patch-based whitening combines well with PCA and other redundancy reduction methods. After PCA, the only thing we need to do to whiten the data is to normalize the variances of the principal components. Thus, PCA with whitening can be expressed as follows:

$X_{\mathrm{PCAwhite}} = S^{-\frac{1}{2}} U^\top X$,   (1.14)

where $S^{-\frac{1}{2}} = \mathrm{diag}\big(\frac{1}{\sqrt{\lambda_1}}, \ldots, \frac{1}{\sqrt{\lambda_n}}\big)$ and the $\lambda_i$ are the eigenvalues.
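In the same NumPy sketch, equation (1.14) amounts to rescaling each principal component by the reciprocal square root of its eigenvalue; the small eps term is our addition for numerical stability and is not part of the formula.

import numpy as np

def pca_whiten(X, U, S, eps=1e-8):
    # Rescale each principal component to unit variance, equation (1.14).
    # U and S are the basis and eigenvalues returned by pca();
    # eps (an implementation detail) avoids division by near-zero eigenvalues.
    return np.diag(1.0 / np.sqrt(S + eps)) @ U.T @ X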

After the whitening process, the second-order information has been nullified. That is, PCA with whitening removes the first- and second-order redundancy of the data. Moreover, unlike PCA, which is solely based on image patches, whitening can also be performed by applying a filter to the whole image.

Based on PCA, we can apply another component analysis algorithm called zero-phase component analysis, abbreviated as ZCA. ZCA is accomplished by transforming the PCA-whitened data back into the original data space:

$X_{\mathrm{ZCAwhite}} = U X_{\mathrm{PCAwhite}}$,   (1.15)

where $U$ is the unitary matrix defined by the SVD, with $UU^\top = I$ (this transform is also referred to as the ‘Mahalanobis transformation’). It can be shown that ZCA keeps the transformed data as close to the original data as possible. Hence, compared to PCA, data whitened by ZCA are more similar to the original data in terms of preserved structural information, apart from the luminance and contrast information. Figure 1.10 illustrates the global and local behaviors of PCA and ZCA, respectively. Since natural image features are mostly local, decorrelation or whitening filters can also be local. For natural images, high-frequency features are commonly associated with small eigenvalues, while the luminance and contrast components take up most of the energy of the image. In this context, ZCA is a simple yet effective way to highlight structural features by removing the luminance and contrast components, which account for little structural information in the image.
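Continuing the sketch, equation (1.15) simply rotates the PCA-whitened data back into the original coordinate system with U; again, the function name is ours.

import numpy as np

def zca_whiten(X, U, S, eps=1e-8):
    # PCA-whiten and then rotate back with U, equation (1.15);
    # the result stays as close as possible to the original data.
    return U @ np.diag(1.0 / np.sqrt(S + eps)) @ U.T @ X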


Figure 1.10. Basis functions obtained with PCA and ZCA, respectively. (a) PCA whitening basis functions, (b) ZCA whitening basis functions (with size 8 × 8), and (c) an enlarged view of a typical ZCA component in which significant variations happen around a specific spatial location.

In the HVS, a receptive field is tuned to a particular light pattern for a maximum response, which is achieved via local processing. The receptive field of ganglion cells in the retina is a good example of a local filtering operation, as is the field of view of ganglion and LGN cells.

If the HVS had to transmit every pixel value to the brain separately, it would not be cost-effective. Fortunately, local neural processing yields a less redundant representation of an input image and then transmits the compressed code to the brain. According to experimental results with natural images, the whitening filters for centralized receptive fields are circularly symmetric and similar to the LoG function, as shown in figure 1.3. Neurobiologists have verified that, compared to the millions of photoreceptors in the retina, the numbers of ganglion and LGN cells are quite small, indicating that a compression operation is performed on the original data.

