Matrix and Tensor Decompositions in Signal Processing – Gérard Favier

I.2. For what uses?

In the big data era, digital information processing plays a key role in many fields of application. Each field has its own specificities and requires specialized, often multidisciplinary, skills to manage both the multimodality of the data and the processing techniques that need to be implemented. Thus, the “intelligent” information processing systems of the future will have to integrate representation tools, such as tensors and graphs, and signal and image processing methods with artificial intelligence techniques based on artificial neural networks and machine learning.

The needs of such systems are diverse and numerous – whether in terms of storage, visualization (3D representation, virtual reality, dissemination of works of art), transmission, imputation, prediction/forecasting, analysis, classification or fusion of multimodal and heterogeneous data. The reader is invited to refer to Lahat et al. (2015) and Papalexakis et al. (2016) for a presentation of various examples of data fusion and data mining based on tensor models.

Some of the key applications of tensor tools are as follows:

 – decomposition or separation of heterogeneous datasets into components/factors or subspaces with the goal of exploiting the multimodal structure of the data and extracting useful information for users from uncertain or noisy data or measurements provided by different sources of information and/or types of sensor. Thus, features can be extracted in different domains (spatial, temporal, frequential) for classification and decision-making tasks;

 – imputation of missing data within an incomplete database using a low-rank tensor model, where the missing data results from defective sensors or communication links, for example. This task is called tensor completion and is a higher order generalization of matrix completion (Candès and Recht 2009; Signoretto et al. 2011; Liu et al. 2013);

 – recovery of useful information from compressed data by reconstructing a signal or an image that has a sparse representation in a predefined basis, using compressive sampling (CS; also known as compressed sensing) techniques (Candès and Wakin 2008; Candès and Plan 2010), applied to sparse, low-rank tensors (Sidiropoulos and Kyrillidis 2012);

 – fusion of data using coupled tensor and matrix decompositions;

 – design of cooperative multi-antenna communication systems (also called MIMO (multiple-input multiple-output) systems); this type of application, which led to the development of several new tensor models, will be considered in the next two volumes of this series;

 – multilinear compressive learning that combines compressed sensing with machine learning;

 – reduction of the dimensionality of multimodal, heterogeneous databases with very large dimensions (big data) by solving a low-rank tensor approximation problem;

 – multiway filtering and tensor data denoising.
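The first item in the list above — decomposition of a dataset into components/factors — is typically carried out with the CP (canonical polyadic) decomposition, whose factor matrices gather the features of each mode (spatial, temporal, frequential). As an illustrative sketch only (the alternating least squares scheme below is a standard generic approach, not an algorithm taken from this book; all function names are mine), a third-order CP decomposition can be computed as follows:

```python
import numpy as np

def unfold(T, mode):
    # Mode-n unfolding: move axis `mode` to the front and flatten the rest.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    # Column-wise Kronecker (Khatri-Rao) product of two factor matrices.
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

def cp_als(T, rank, n_iter=200, seed=0):
    # Alternating least squares for the third-order CP model
    # T[i,j,k] ~ sum_r A[i,r] * B[j,r] * C[k,r].
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((d, rank)) for d in T.shape)
    for _ in range(n_iter):
        # Each step solves a linear least squares problem in one factor,
        # the other two being fixed.
        A = unfold(T, 0) @ np.linalg.pinv(khatri_rao(B, C).T)
        B = unfold(T, 1) @ np.linalg.pinv(khatri_rao(A, C).T)
        C = unfold(T, 2) @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C
```

The columns of the estimated factors can then serve as features for the classification and decision-making tasks mentioned above; the same alternating scheme, restricted to the observed entries, underlies many tensor completion methods.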
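For the dimensionality reduction item, a classical low-rank approximation is the truncated higher order SVD (HOSVD), which compresses a large tensor into a small core tensor and one factor matrix per mode. The following numpy sketch is illustrative only (it is not code from the book, and the function names are mine):

```python
import numpy as np

def unfold(T, mode):
    # Mode-n unfolding: move axis `mode` to the front and flatten the rest.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def truncated_hosvd(T, ranks):
    # Factor matrices: leading left singular vectors of each mode-n unfolding.
    U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0][:, :r]
         for n, r in enumerate(ranks)]
    # Core tensor: project T onto the column space of each factor.
    G = T
    for n, Un in enumerate(U):
        G = np.moveaxis(np.tensordot(Un.T, np.moveaxis(G, n, 0), axes=1), 0, n)
    return G, U

def reconstruct(G, U):
    # Multilinear (Tucker) product of the core with the factor matrices.
    T = G
    for n, Un in enumerate(U):
        T = np.moveaxis(np.tensordot(Un, np.moveaxis(T, n, 0), axes=1), 0, n)
    return T
```

Storing the core and the factors instead of the full tensor is what makes this a dimensionality reduction tool; truncating the multilinear ranks also performs the multiway filtering/denoising mentioned in the last item.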

Tensors can also be used to tensorize neural networks with fully connected layers, by expressing the weight matrix of a layer as a tensor train (TT) whose cores represent the parameters of the layer. This considerably reduces the parametric complexity and, therefore, the storage space. By compressing the information contained in the layers in this way, tensor decompositions make it possible to increase the number of hidden units (Novikov et al. 2015). When used together with multilayer perceptron neural networks to solve classification problems, tensors achieve lower error rates with fewer parameters and less computation time than neural networks alone (Chien and Bao 2017). Conversely, neural networks can be used to learn the rank of a tensor (Zhou et al. 2019), or to compute its eigenvalues and singular values, and hence the rank-one approximation of a tensor (Che et al. 2017).
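To illustrate the TT format underlying this tensorization, the sketch below implements the standard TT-SVD procedure (sequential truncated SVDs); it is a minimal generic implementation, not code from the book. A d-way tensor with dimensions d_1, ..., d_d is factored into cores of shape (r_{k-1}, d_k, r_k), so that the prod(d_k) entries are replaced by only sum_k r_{k-1} d_k r_k parameters when the TT ranks r_k are small — the source of the compression exploited by Novikov et al. (2015):

```python
import numpy as np

def tt_svd(T, max_rank):
    # Sequential truncated SVDs: factor a d-way tensor into TT cores
    # G_k of shape (r_{k-1}, dims[k], r_k), with r_0 = r_d = 1.
    dims = T.shape
    cores, r = [], 1
    M = T.reshape(r * dims[0], -1)
    for k in range(len(dims) - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        rk = min(max_rank, len(s))          # truncate the TT rank
        cores.append(U[:, :rk].reshape(r, dims[k], rk))
        # Carry the remainder to the next mode.
        M = (s[:rk, None] * Vt[:rk]).reshape(rk * dims[k + 1], -1)
        r = rk
    cores.append(M.reshape(r, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    # Contract the chain of cores back into a full tensor.
    T = cores[0]
    for G in cores[1:]:
        T = np.tensordot(T, G, axes=([-1], [0]))
    return T.reshape([G.shape[1] for G in cores])
```

In the tensorized-layer setting, the weight matrix is first reshaped into such a d-way tensor, and training operates directly on the cores rather than on the full matrix.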
