Matrix and Tensor Decompositions in Signal Processing - Gérard Favier

I.1. What are the advantages of tensor approaches?


In most applications, a tensor 𝒳 of order N is viewed as an array of real or complex numbers. The current element of the tensor is denoted x_{i_1, …, i_N}, where each index i_n is associated with the nth mode, and I_n is its dimension, i.e. the number of elements for the nth mode. The order of the tensor is the number N of indices, i.e. the number of modes. Tensors are written with calligraphic letters1. An Nth-order tensor with entries x_{i_1, …, i_N} ∈ 𝕂 is written 𝒳 ∈ 𝕂^{I_1 × ⋯ × I_N}, where 𝕂 = ℝ or ℂ, depending on whether the tensor is real-valued or complex-valued, and I_1 × ⋯ × I_N represents the size of 𝒳.
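These definitions map directly onto multidimensional arrays. A minimal NumPy sketch (not from the book; the dimensions I_1 = 4, I_2 = 5, I_3 = 3 are hypothetical) showing how the order, the mode dimensions, and a single entry x_{i_1, i_2, i_3} appear in code:

```python
import numpy as np

# A third-order real-valued tensor (N = 3): one index per mode.
# Hypothetical dimensions I1 = 4, I2 = 5, I3 = 3.
X = np.random.default_rng(0).standard_normal((4, 5, 3))

print(X.ndim)      # order N: the number of modes
print(X.shape)     # (I1, I2, I3): the dimension of each mode
print(X[1, 2, 0])  # one entry x_{i1, i2, i3} (indices are zero-based here)
```

Note that NumPy indices start at 0, whereas the book's index i_n runs from 1 to I_n.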

In general, a mode (also called a way) can have one of the following interpretations: (i) as a source of information (user, patient, client, trial, etc.); (ii) as a type of entity attached to the data (items/products, types of music, types of film, etc.); (iii) as a tag that characterizes an item, a piece of music, a film, etc.; (iv) as a recording modality that captures diversity in various domains (space, time, frequency, wavelength, polarization, color, etc.). Thus, a digital color image can be represented as a three-dimensional tensor (of pixels) with two spatial modes, one for the rows (width) and one for the columns (height), and one channel mode (RGB colors). For example, a color image can be represented as a tensor of size 1024 × 768 × 3, where the third mode corresponds to the intensity of the three RGB colors (red, green, blue). For a volumetric image, there are three spatial modes (width × height × depth), and the points of the image are called voxels. In hyperspectral imagery, in addition to the two spatial dimensions, there is a third dimension corresponding to the emission wavelength within a spectral band.
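The color-image example can be sketched directly in NumPy (a hypothetical all-black image, sized as in the text): fixing the index of the channel mode extracts one spatial matrix per color.

```python
import numpy as np

# A 1024 x 768 color image as a third-order tensor of pixels, following
# the size given in the text: two spatial modes and one channel mode.
image = np.zeros((1024, 768, 3), dtype=np.uint8)

# Fixing the third (channel) index selects one color plane, e.g. red:
red_plane = image[:, :, 0]

print(image.ndim)       # 3 modes
print(red_plane.shape)  # (1024, 768): one spatial matrix per RGB channel
```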

Tensor approaches benefit from the following advantages over matrix approaches:

 – the essential uniqueness property2, satisfied by some tensor decompositions, such as PARAFAC (parallel factors) (Harshman 1970) under certain mild conditions; for matrix decompositions, this property requires certain restrictive conditions on the factor matrices, such as orthogonality, non-negativity, or a specific structure (triangular, Vandermonde, Toeplitz, etc.);

 – the ability to solve certain problems, such as the identification of communication channels, directly from measured signals, without requiring the calculation of high-order statistics of these signals or the use of long pilot sequences. The resulting deterministic and semi-blind processing can be performed with signal recordings that are shorter than those required by statistical methods based on the estimation of high-order moments or cumulants. For the blind source separation problem, tensor approaches can be used to tackle the case of underdetermined systems, i.e. systems with more sources than sensors;

 – the possibility of compressing big data sets via a data tensorization and the use of a tensor decomposition, in particular, a low multilinear rank approximation;

 – a greater flexibility in representing and processing multimodal data by considering the modalities separately, instead of stacking the corresponding data into a vector or a matrix. This allows the multilinear structure of data to be preserved, meaning that interactions between modes can be taken into account;

 – a greater number of modalities can be incorporated into tensor representations of data, meaning that more complementary information is available, which allows the performance of certain systems to be improved, e.g. wireless communication, recommendation, diagnostic, and monitoring systems, by making detection, interpretation, recognition, and classification operations easier and more efficient. This led to a generalization of certain matrix algorithms, like SVD (singular value decomposition) to MLSVD (multilinear SVD), also known as HOSVD (higher order SVD) (de Lathauwer et al. 2000a); similarly, certain signal processing algorithms were generalized, like PCA (principal component analysis) to MPCA (multilinear PCA) (Lu et al. 2008) or TRPCA (tensor robust PCA) (Lu et al. 2020) and ICA (independent component analysis) to MICA (multilinear ICA) (Vasilescu and Terzopoulos 2005) or tensor PICA (probabilistic ICA) (Beckmann and Smith 2005).
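Two of the advantages above, the low multilinear rank approximation used for compression and the MLSVD/HOSVD generalization of the SVD, can be sketched together in NumPy. The helper names `unfold` and `hosvd` are hypothetical, not from the book; the sketch computes each factor from the SVD of the corresponding mode-n unfolding, as in (de Lathauwer et al. 2000a), and contracts the tensor with the transposed factors to obtain the core.

```python
import numpy as np

def unfold(X, n):
    # Mode-n unfolding: mode n becomes the rows of a matrix.
    return np.moveaxis(X, n, 0).reshape(X.shape[n], -1)

def hosvd(X, ranks):
    # Truncated MLSVD/HOSVD sketch: factor U_n = leading left singular
    # vectors of the mode-n unfolding; core G = X contracted with U_n^T.
    U = [np.linalg.svd(unfold(X, n), full_matrices=False)[0][:, :r]
         for n, r in enumerate(ranks)]
    G = X
    for n, Un in enumerate(U):
        G = np.moveaxis(np.tensordot(Un.T, np.moveaxis(G, n, 0), axes=1), 0, n)
    return G, U

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 7, 8))

G, U = hosvd(X, (6, 7, 8))  # full multilinear ranks: exact decomposition
# Reconstruct X as the core multiplied by each factor along its mode:
Xhat = G
for n, Un in enumerate(U):
    Xhat = np.moveaxis(np.tensordot(Un, np.moveaxis(Xhat, n, 0), axes=1), 0, n)
print(np.allclose(X, Xhat))  # True at full multilinear rank
```

Choosing ranks smaller than the mode dimensions, e.g. `hosvd(X, (3, 3, 3))`, yields a 3 × 3 × 3 core plus three small factor matrices, which is the compression mechanism mentioned above.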

It is worth noting that, with a tensor model, the number of modalities considered in a problem can be increased either by increasing the order of the data tensor or by coupling tensor and/or matrix decompositions that share one or several modes. Such a coupling approach is called data fusion using a coupled tensor/matrix factorization. Two examples of this type of coupling are presented later in this introductory chapter. In the first, EEG signals are coupled with functional magnetic resonance imaging (fMRI) data to analyze brain function; in the second, hyperspectral and multispectral images are merged for remote sensing.
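The shared-mode coupling can be illustrated with synthetic data (all factor matrices and dimensions below are hypothetical, chosen only to mimic the EEG/fMRI example): a rank-R PARAFAC tensor and a matrix built from a common factor A along their first mode.

```python
import numpy as np

# Coupled tensor/matrix data sharing one mode (synthetic sketch).
# Tensor X = sum_r a_r o b_r o c_r (rank-R PARAFAC) and matrix
# Y = A @ D.T share the factor A along the first mode.
rng = np.random.default_rng(1)
R = 2
A = rng.standard_normal((5, R))   # shared mode (e.g. subjects)
B = rng.standard_normal((6, R))   # e.g. EEG channels
C = rng.standard_normal((7, R))   # e.g. time samples
D = rng.standard_normal((4, R))   # e.g. fMRI voxels

X = np.einsum('ir,jr,kr->ijk', A, B, C)  # third-order PARAFAC tensor
Y = A @ D.T                              # coupled matrix

print(X.shape, Y.shape)  # (5, 6, 7) (5, 4): the first dimension is shared
```

A coupled factorization would fit X and Y jointly so that both data sets constrain the same estimate of A, which is what makes the fusion informative.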

The other approach, namely increasing the order of the data tensor, will be illustrated in Volume 3 of this series by giving a unified presentation of various models of wireless communication systems designed using tensors. In order to improve system performance, both in terms of transmission and reception, the idea is to employ multiple types of diversity simultaneously in various domains (space, time, frequency, code, etc.), each type of diversity being associated with a mode of the tensor of received signals. Coupled tensor models will also be presented in the context of cooperative communication systems with relays.

