Matrix and Tensor Decompositions in Signal Processing

Book Description

This second volume presents the main matrix and tensor decompositions and their uniqueness properties, together with tensor networks, which are very useful for the analysis of massive data. Parametric estimation algorithms are presented for identifying the main tensor decompositions. After a brief historical review of compressed sensing methods, the main methods for recovering matrices and tensors with missing data are surveyed under the low-rank hypothesis. Illustrative examples are provided.
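To make the low-rank recovery problem mentioned above concrete, here is a minimal Python/numpy sketch, not taken from the book: the function name complete_low_rank is ours, and the iteration shown is a standard "hard impute" scheme (truncated SVD alternated with re-imposing the observed entries), not the book's specific algorithms.

import numpy as np

def complete_low_rank(Y, mask, rank, n_iter=200):
    # Y: data matrix (arbitrary values at unobserved positions)
    # mask: boolean array, True where Y is observed
    # rank: assumed (low) rank of the underlying matrix
    X = np.where(mask, Y, 0.0)  # start with zeros at the missing entries
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X_r = (U[:, :rank] * s[:rank]) @ Vt[:rank]  # best rank-r approximation
        X = np.where(mask, Y, X_r)  # keep observed entries, impute the rest
    return X

# Toy check: recover a rank-2 matrix from roughly 60% of its entries.
rng = np.random.default_rng(0)
M = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))
mask = rng.random(M.shape) < 0.6
M_hat = complete_low_rank(M, mask, rank=2)
print(np.linalg.norm(M_hat - M) / np.linalg.norm(M))  # relative error; small after convergence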

Contents

Gérard Favier. Matrix and Tensor Decompositions in Signal Processing

Table of Contents

List of Illustrations

List of Tables

Guide

Pages

Matrix and Tensor Decompositions in Signal Processing

Introduction

I.1. What are the advantages of tensor approaches?

I.2. For what uses?

I.3. In what fields of application?

I.4. With what tensor decompositions?

I.5. With what cost functions and optimization algorithms?

I.6. Brief description of content

1. Matrix Decompositions

1.1. Introduction

1.2. Overview of the most common matrix decompositions

1.3. Eigenvalue decomposition

1.3.1. Reminders about the eigenvalues of a matrix

1.3.2. Eigendecomposition and properties

1.3.3. Special case of symmetric/Hermitian matrices

1.3.4. Application to compute the powers of a matrix and a matrix polynomial

1.3.5. Application to compute a state transition matrix

1.3.6. Application to compute the transfer function and the output of a discrete-time linear system

1.4. URV^H decomposition

1.5. Singular value decomposition

1.5.1. Definition and properties

1.5.2. Reduced SVD and dyadic decomposition

1.5.3. SVD and fundamental subspaces associated with a matrix

1.5.4. SVD and the Moore–Penrose pseudo-inverse

1.5.5. SVD computation

1.5.6. SVD and matrix norms

1.5.7. SVD and low-rank matrix approximation

1.5.8. SVD and orthogonal projectors

1.5.9. SVD and LS estimator

Case of a rank-deficient matrix

Case of an ill-conditioned matrix and regularized LS solution

1.5.10. SVD and polar decomposition

Case of a full-rank rectangular matrix

1.5.11. SVD and PCA

1.5.11.1. Principle of the method

1.5.11.2. PCA and variance maximization

1.5.11.3. PCA and dimensionality reduction

1.5.11.4. PCA of data

1.5.11.5. PCA algorithm

1.5.12. SVD and blind source separation

1.5.12.1. Noiseless case

1.5.12.2. Noisy case

1.6. CUR decomposition

2. Hadamard, Kronecker and Khatri–Rao Products

2.1. Introduction

2.2. Notation

2.3. Hadamard product

2.3.1. Definition and identities

2.3.2. Fundamental properties

2.3.3. Basic relations

2.3.4. Relations between the diag operator and Hadamard product

2.4. Kronecker product

2.4.1. Kronecker product of vectors

2.4.1.1. Definition

2.4.1.2. Fundamental properties

2.4.1.3. Basic relations

2.4.1.4. Rank-one matrices and Kronecker products of vectors

2.4.1.5. Vectorization of a rank-one matrix

2.4.2. Kronecker product of matrices

2.4.2.1. Definitions and identities

2.4.2.2. Fundamental properties

2.4.2.3. Multiple Kronecker product

2.4.2.4. Interpretation as a tensor product

2.4.2.5. Identities involving matrix-vector Kronecker products

2.4.3. Rank, trace, determinant and spectrum of a Kronecker product

2.4.4. Structural properties of a Kronecker product

2.4.5. Inverse and Moore–Penrose pseudo-inverse of a Kronecker product

2.4.6. Decompositions of a Kronecker product

2.5. Kronecker sum

2.5.1. Definition

2.5.2. Properties

2.6. Index convention

2.6.1. Writing vectors and matrices with the index convention

2.6.2. Basic rules and identities with the index convention

2.6.3. Matrix products and index convention

2.6.4. Kronecker products and index convention

2.6.5. Vectorization and index convention

2.6.6. Vectorization formulae

2.6.7. Vectorization of partitioned matrices

2.6.8. Traces of matrix products and index convention

2.7. Commutation matrices

2.7.1. Definition

2.7.2. Properties

2.7.3. Kronecker product and permutation of factors

2.7.4. Multiple Kronecker product and commutation matrices

2.7.5. Block Kronecker product

2.7.6. Strong Kronecker product

2.8. Relations between the diag operator and the Kronecker product

2.9. Khatri–Rao product

2.9.1. Definition

2.9.2. Khatri–Rao product and index convention

2.9.3. Multiple Khatri–Rao product

2.9.4. Properties

2.9.5. Identities

2.9.6. Khatri–Rao product and permutation of factors

2.9.7. Trace of a product of matrices and Khatri–Rao product

2.10. Relations between vectorization and Kronecker and Khatri–Rao products

2.11. Relations between the Kronecker, Khatri–Rao and Hadamard products

2.12. Applications

2.12.1. Partial derivatives and index convention

2.12.2. Solving matrix equations

2.12.2.1. Continuous-time Sylvester and Lyapunov equations

2.12.2.2. Discrete-time Sylvester and Lyapunov equations

2.12.2.3. Equations of the form Y = AXB^T

2.12.2.4. Equations of the form Y = (B ⊗ A)X

2.12.2.5. Estimation of the factors of a Khatri–Rao product

2.12.2.6. Estimation of the factors of a Kronecker product

3. Tensor Operations

3.1. Introduction

3.2. Notation and particular sets of tensors

3.3. Notion of slice

3.3.1. Fibers

3.3.2. Matrix and tensor slices

3.4. Mode combination

3.5. Partitioned tensors or block tensors

3.6. Diagonal tensors

3.6.1. Case of a tensor

3.6.2. Case of a square tensor

3.6.3. Case of a rectangular tensor

3.7. Matricization

3.7.1. Matricization of a third-order tensor

3.7.2. Matrix unfoldings and index convention

3.7.3. Matricization of a tensor of order N

3.7.4. Tensor matricization by index blocks

3.8. Subspaces associated with a tensor and multilinear rank

3.9. Vectorization

3.9.1. Vectorization of a tensor of order N

3.9.2. Vectorization of a third-order tensor

3.10. Transposition

3.10.1. Definition of a transpose tensor

3.10.2. Properties of transpose tensors

3.10.3. Transposition and tensor contraction

3.11. Symmetric/partially symmetric tensors

3.11.1. Symmetric tensors

3.11.2. Partially symmetric/Hermitian tensors

3.11.3. Multilinear forms with Hermitian symmetry and Hermitian tensors

3.11.4. Symmetrization of a tensor

3.12. Triangular tensors

3.13. Multiplication operations

3.13.1. Outer product of tensors

3.13.2. Tensor-matrix multiplication

3.13.2.1. Definition of the mode-p product

3.13.2.2. Properties

3.13.3. Tensor–vector multiplication

3.13.3.1. Definition

3.13.3.2. Case of a third-order tensor

3.13.4. Mode-(p, n) product

3.13.5. Einstein product

3.13.5.1. Definition and properties

3.13.5.2. Orthogonality and idempotence properties

3.13.5.3. Isomorphism of tensor and matrix groups

3.14. Inverse and pseudo-inverse tensors

3.15. Tensor decompositions in the form of factorizations

3.15.1. Eigendecomposition of a symmetric square tensor

3.15.2. SVD decomposition of a rectangular tensor

3.15.3. Connection between SVD and HOSVD

3.15.4. Full-rank decomposition

3.16. Inner product, Frobenius norm and trace of a tensor

3.16.1. Inner product of two tensors

3.16.2. Frobenius norm of a tensor

3.16.3. Trace of a tensor

3.17. Tensor systems and homogeneous polynomials

3.17.1. Multilinear systems based on the mode-n product

3.17.2. Tensor systems based on the Einstein product

3.17.3. Solving tensor systems using LS

3.18. Hadamard and Kronecker products of tensors

3.19. Tensor extension

3.20. Tensorization

3.21. Hankelization

4. Eigenvalues and Singular Values of a Tensor

4.1. Introduction

4.2. Eigenvalues of a tensor of order greater than two

4.2.1. Different definitions of the eigenvalues of a tensor

4.2.2. Positive/negative (semi-)definite tensors

4.2.3. Orthogonally/unitarily similar tensors

4.3. Best rank-one approximation

4.4. Orthogonal decompositions

4.5. Singular values of a tensor

5. Tensor Decompositions

5.1. Introduction

5.2. Tensor models

5.2.1. Tucker model

5.2.1.1. Definition in scalar form

5.2.1.2. Expression in terms of mode-n products

5.2.1.3. Expression in terms of outer products

5.2.1.4. Matricization

5.2.1.5. Vectorization

5.2.1.6. Case of a third-order tensor

5.2.1.7. Uniqueness

5.2.1.8. HOSVD algorithm

5.2.2. Tucker-(N₁, N) model

5.2.3. Tucker model of a transpose tensor

5.2.4. Tucker decomposition and multidimensional Fourier transform

5.2.5. PARAFAC model

5.2.5.1. Scalar expression

5.2.5.2. Other expressions

5.2.5.3. Matricization

5.2.5.4. Vectorization

5.2.5.5. Normalized form

5.2.5.6. Case of a third-order tensor

5.2.5.7. ALS algorithm for estimating a PARAFAC model

5.2.5.8. Estimation of a third-order rank-one tensor

5.2.5.9. Case of a fourth-order tensor

5.2.5.10. Variants of the PARAFAC model

5.2.5.11. PARAFAC model of a transpose tensor

5.2.5.12. Uniqueness and identifiability

5.2.6. Block tensor models

5.2.7. Constrained tensor models

5.2.7.1. Interpretation and use of constraints

5.2.7.2. Constrained PARAFAC models

5.2.7.3. BTD model

5.3. Examples of tensor models

5.3.1. Model of multidimensional harmonics

5.3.2. Source separation

5.3.2.1. Instantaneous mixture modeled using the received signals

5.3.2.2. Instantaneous mixture modeled using cumulants of the received signals

5.3.3. Model of a FIR system using fourth-order output cumulants

Appendix. Random Variables and Stochastic Processes

A1.1. Introduction

A1.2. Random variables

A1.2.1 Real scalar random variables

A1.2.1.1 Distribution function and probability density

A1.2.1.2 Moments and central moments

A1.2.1.3 Independence, non-correlation and orthogonality

A1.2.1.4 Ensemble averages and empirical averages

A1.2.1.5 Characteristic functions, moments and cumulants

A1.2.2 Real multidimensional random variables

A1.2.2.1 Second-order statistics

A1.2.2.2 Characteristic functions, moments and cumulants

A1.2.2.3 Relationship between cumulants and moments

A1.2.2.4 Properties of cumulants

A1.2.2.5 Cumulants of complex random variables

A1.2.2.6 Circular complex random variables

A1.2.3 Gaussian distribution

A1.2.3.1 Case of a scalar Gaussian variable

A1.2.3.2 Characteristic functions and HOS

A1.2.3.3 Case of a Gaussian random vector

A1.3. Discrete-time random signals

A1.3.1 Second-order statistics

A1.3.2 Stationary and ergodic random signals

A1.3.3 Higher order statistics of random signals

A1.3.3.1 Cumulants of real random signals

A1.3.3.2 Polyspectra

A1.3.3.3 Cumulants of complex random signals

A1.3.3.4 Case of complex circular random signals

A1.4. Application to system identification

A1.4.1 Case of linear systems

A1.4.2 Case of homogeneous quadratic systems

References

Index

WILEY END USER LICENSE AGREEMENT

Excerpt from the Book

Matrices and Tensors with Signal Processing Set

.....

See Figure I.2 for a third-order tensor, and Chapter 5 for a detailed presentation.

This decomposition has been used to solve the tensor completion problem (Grasedyck et al. 2015; Bengua et al. 2017), for facial recognition (Brandoni and Simoncini 2020) and for modeling MIMO communication channels (Zniyed et al. 2020), among many other applications. A brief description of the TT decomposition is given in section 3.13.4 using the mode-(p, n) product. Note that a specific SVD-based algorithm, called TT-SVD, was proposed by Oseledets (2011) for computing a TT decomposition.
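For readers curious how TT-SVD proceeds, the following compact Python/numpy sketch shows the general structure of the algorithm: each step applies a truncated SVD to a matrix unfolding and peels off one TT core. This is our simplification, with a fixed rank cap in place of the prescribed-accuracy truncation of Oseledets (2011); the name tt_svd is illustrative, not code from the book.

import numpy as np

def tt_svd(T, max_rank):
    # Decompose an N-way array T into TT cores G[k] of shape (r_{k-1}, I_k, r_k).
    dims = T.shape
    cores, r_prev = [], 1
    C = T.reshape(r_prev * dims[0], -1)  # first unfolding: I_1 x (I_2 ... I_N)
    for k in range(len(dims) - 1):
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        r = int(min(max_rank, np.sum(s > 1e-12)))  # drop negligible singular values
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        C = (s[:r, None] * Vt[:r]).reshape(r * dims[k + 1], -1)  # next unfolding
        r_prev = r
    cores.append(C.reshape(r_prev, dims[-1], 1))  # last core absorbs the remainder
    return cores

# Sanity check: with a generous rank cap, the cores reproduce the tensor exactly.
T = np.random.default_rng(1).standard_normal((4, 5, 6))
G = tt_svd(T, max_rank=30)
full = G[0]
for core in G[1:]:
    full = np.tensordot(full, core, axes=1)  # contract adjacent bond dimensions
print(np.allclose(full.reshape(T.shape), T))  # True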

.....
