4.3 Nonstationarity

Nonstationarity of a signal can be quantified by evaluating the changes in its distribution over time. In a strict‐sense stationary process the signal distribution remains the same over different time intervals. However, strict‐sense stationarity is often not required, and it is sufficient to have statistics such as the mean and variance fixed (or without significant change) over time, i.e. wide‐sense stationarity.

Although in many applications the multichannel EEG distribution is considered multivariate Gaussian, the mean and covariance generally change from segment to segment. EEGs are therefore stationary only within short intervals, i.e. they are quasi‐stationary. This interval can shrink owing to rapid changes in brain state, such as moving from eyes‐closed to eyes‐open, from sleep to wakefulness, or from normal activity to seizure, as well as changes in alertness, the brain responding to stimulation in the form of event‐related potential (ERP) and evoked potential (EP) signals, eye blinking, and changes in emotion. In practice, however, a 10 second window of EEG is considered stationary. Tracking the first‐ and second‐order statistics over such windows, as sketched below, exposes any drift.
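As an illustration, the following minimal Python sketch (assuming NumPy; the signal, its 256 Hz sampling rate, and the variance drift are all synthetic and hypothetical) splits a single‐channel signal into non‐overlapping 10 second windows and reports the per‐window mean and variance. A systematic drift in these statistics across windows indicates nonstationarity.

```python
import numpy as np

def windowed_stats(x, fs, win_sec=10.0):
    """Split a single-channel signal into non-overlapping windows
    and return the per-window mean and variance."""
    win = int(win_sec * fs)
    n_win = len(x) // win
    segs = x[:n_win * win].reshape(n_win, win)
    return segs.mean(axis=1), segs.var(axis=1)

# Synthetic example: a signal whose variance grows over time,
# so the windowed variance exposes the nonstationarity.
fs = 256                                  # assumed sampling rate (Hz)
t = np.arange(60 * fs) / fs               # one minute of data
rng = np.random.default_rng(0)
x = (1 + 0.5 * t / t[-1]) * rng.standard_normal(t.size)

means, variances = windowed_stats(x, fs)
print(means.round(3))                     # stays near zero
print(variances.round(3))                 # grows across windows
```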

The change in the distribution of the signal segments can be measured both through the parameters of a Gaussian process and through the deviation of the distribution from Gaussian. The non‐Gaussianity of the signals can be checked by measuring or estimating higher‐order moments such as skewness and kurtosis, or measures such as negentropy and the Kullback–Leibler (KL) distance (aka KL divergence).

Skewness is a measure of asymmetry, or more precisely, of the lack of symmetry of a distribution. A distribution is symmetric if it looks the same to the left and right of its mean point. The skewness of a real signal is defined as

$$\operatorname{skew}(x(n)) = E\left[\frac{(x(n)-\mu)^{3}}{\sigma^{3}}\right] \tag{4.1}$$

where μ and σ are respectively the mean and standard deviation, and E denotes statistical expectation. If the bulk of the distribution lies to the right of the mean point (so the longer tail extends to the left) the skewness is negative, and vice versa. For a symmetric distribution such as the Gaussian, the skewness is zero.
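A minimal sketch of estimating Eq. (4.1) from samples, assuming NumPy; the expectation is replaced by a time average (ergodicity is discussed below), and the test signals are illustrative choices, not EEG data.

```python
import numpy as np

def skewness(x):
    """Sample skewness per Eq. (4.1): E[(x - mu)^3] / sigma^3,
    with the expectation replaced by a time average."""
    mu = x.mean()
    sigma = x.std()
    return np.mean((x - mu) ** 3) / sigma ** 3

rng = np.random.default_rng(1)
symmetric = rng.standard_normal(100_000)      # skewness ~ 0
right_tailed = rng.exponential(size=100_000)  # longer right tail -> positive
print(skewness(symmetric), skewness(right_tailed))
```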

Kurtosis is a measure of how peaked or flat a distribution is relative to the normal distribution. Datasets with high kurtosis tend to have a distinct peak near the mean, decline rather rapidly, and have heavy tails. Datasets with low kurtosis tend to have a flat top near the mean rather than a sharp peak; a uniform distribution is the extreme case. The kurtosis of a signal x(n) is defined as:

$$\operatorname{kurt}(x(n)) = \frac{m_{4}(x(n))}{m_{2}^{2}(x(n))} \tag{4.2}$$

where $m_i(x(n))$ is the ith central moment of the signal x(n), i.e. $m_i(x(n)) = E[(x(n) - \mu)^{i}]$. The kurtosis of a normally distributed signal is three. Therefore, an excess or normalized kurtosis is often used, defined as

$$\operatorname{kurt}_{\mathrm{ex}}(x(n)) = \frac{m_{4}(x(n))}{m_{2}^{2}(x(n))} - 3 \tag{4.3}$$

which is zero for Gaussian distributed signals. Often the signals are considered ergodic, hence the statistical averages can be assumed identical to time averages and estimated from a single realization, as in the sketch below.
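The following sketch, assuming NumPy, estimates Eqs. (4.2) and (4.3) with the central moments computed as time averages; the three test distributions are illustrative choices.

```python
import numpy as np

def central_moment(x, i):
    """i-th central moment m_i(x) = E[(x - mu)^i], as a time average."""
    return np.mean((x - x.mean()) ** i)

def excess_kurtosis(x):
    """Normalized kurtosis of Eq. (4.3): m_4 / m_2^2 - 3."""
    return central_moment(x, 4) / central_moment(x, 2) ** 2 - 3

rng = np.random.default_rng(2)
print(excess_kurtosis(rng.standard_normal(100_000)))  # ~ 0    (Gaussian)
print(excess_kurtosis(rng.uniform(-1, 1, 100_000)))   # ~ -1.2 (flat top)
print(excess_kurtosis(rng.laplace(size=100_000)))     # ~ +3   (heavy tails)
```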

The negentropy of a signal x(n) [11] is defined as:

$$J(x(n)) = H(x_{\mathrm{Gauss}}(n)) - H(x(n)) \tag{4.4}$$

where $x_{\mathrm{Gauss}}(n)$ is a Gaussian random signal with the same covariance as x(n) and H(·) is the differential entropy [12] defined as:

$$H(x(n)) = -\int p(x(n)) \log p(x(n)) \, dx(n) \tag{4.5}$$

and p(x(n)) is the signal distribution. Negentropy is always nonnegative.
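A minimal sketch of estimating Eqs. (4.4) and (4.5), assuming NumPy: the differential entropy is approximated from a normalized histogram (with a log bin‐width correction), and the entropy of the matched Gaussian uses the closed form ½ log(2πeσ²). The bin count of 100 is an arbitrary choice, and the estimate is rough rather than exact.

```python
import numpy as np

def differential_entropy(x, bins=100):
    """Histogram estimate of Eq. (4.5): -sum(p_k log p_k) + log(bin width)."""
    counts, edges = np.histogram(x, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]                       # empty bins contribute nothing
    width = edges[1] - edges[0]
    return -np.sum(p * np.log(p)) + np.log(width)

def negentropy(x, bins=100):
    """Eq. (4.4): entropy of a Gaussian with the same variance, minus H(x).
    The Gaussian entropy has the closed form 0.5 * log(2*pi*e*sigma^2)."""
    h_gauss = 0.5 * np.log(2 * np.pi * np.e * x.var())
    return h_gauss - differential_entropy(x, bins)

rng = np.random.default_rng(3)
print(negentropy(rng.standard_normal(100_000)))  # ~ 0 for Gaussian data
print(negentropy(rng.laplace(size=100_000)))     # > 0 for non-Gaussian data
```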

Entropy by itself is an important measure of EEG behaviour, particularly where brain synchronization changes, for example when brain waves become gradually more synchronized as the brain approaches seizure onset. It is also a valuable indicator of other neurological and psychiatric disorders.

By replacing the probability density function (pdf) with joint or conditional pdfs in Eq. (4.5), joint or conditional entropy is defined, respectively. In addition, there are newer definitions of entropy catering for neurological applications, such as the multiscale fluctuation‐based dispersion entropy defined in [13], which is briefly explained in this chapter, and those in the references therein.

The KL distance between two distributions $p_1$ and $p_2$ is defined as:

$$\operatorname{KL}(p_{1}, p_{2}) = \int p_{1}(z) \log \frac{p_{1}(z)}{p_{2}(z)} \, dz \tag{4.6}$$

The KL distance is generally asymmetric: swapping $p_1$ and $p_2$ in Eq. (4.6) changes its value. Its minimum, zero, occurs when $p_1(z) = p_2(z)$. A discrete estimate is sketched below.
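A minimal sketch of Eq. (4.6) for distributions given as discrete pmfs (for instance, normalized amplitude histograms of two EEG segments), assuming NumPy; the example pmfs are arbitrary, and $p_2$ is assumed positive wherever $p_1$ is.

```python
import numpy as np

def kl_distance(p1, p2):
    """Discrete form of Eq. (4.6): sum of p1 * log(p1 / p2).
    Bins where p1 = 0 contribute nothing; p2 must be > 0 on p1's support."""
    p1 = np.asarray(p1, dtype=float)
    p2 = np.asarray(p2, dtype=float)
    mask = p1 > 0
    return np.sum(p1[mask] * np.log(p1[mask] / p2[mask]))

p1 = np.array([0.1, 0.4, 0.5])
p2 = np.array([0.3, 0.4, 0.3])
print(kl_distance(p1, p2))  # differs from the reversed order...
print(kl_distance(p2, p1))  # ...showing the asymmetry
print(kl_distance(p1, p1))  # 0.0: the minimum, when p1 = p2
```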
