4.7 Coherency, Multivariate Autoregressive Modelling, and Directed Transfer Function
In some applications, such as the detection and classification of finger movement, it is very useful to find out how the associated movement signals propagate within the neural network of the brain. As will be shown in Chapter 16, there is a consistent movement of the source signals from the occipital to the temporal regions. It is also clear that during mental tasks different regions of the brain communicate with each other. The interaction and cross-talk among the EEG channels may be the only clue to understanding this process. This requires recognition of the transient periods of synchrony between various regions of the brain. These phenomena are not easy to observe by visual inspection of the EEGs. Therefore, some signal processing techniques have to be used in order to infer such causal relationships. One time series is said to be causal to another if the information contained in that time series enables prediction of the other time series.
The spatial statistics of scalp EEG are usually presented as coherence within individual frequency bands. These coherences result both from correlations among neocortical sources and from volume conduction through the tissues of the head, i.e. the brain, cerebrospinal fluid, skull, and scalp. Spectral coherence [28] is therefore a common method for determining the synchrony in EEG activity. The coherency is given as:
Coh_ij(f) = |C_ij(f)|² / (C_ii(f) C_jj(f))   (4.83)
Figure 4.10 Cross-spectral coherence for a set of three EEG electrodes, one second before right-finger movement. Each block refers to one electrode. By careful inspection of the figure, it is observed that the same waveform is transferred from Cz to C3.
where C_ij(f) is the Fourier transform of the cross-correlation coefficients between channel i and channel j of the EEG. Figure 4.10 shows an example of the cross-spectral coherence around one second prior to finger movement. A measure of this coherency, such as an average over a frequency band, is capable of detecting zero-lag synchronization as well as synchronization with a fixed non-zero time lag, which may occur when there is a significant delay between the two neuronal population sites [29]. However, it does not provide any information on the directionality of the coupling between the two recording sites.
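Such a band-wise coherency measure can be estimated with standard spectral tools. The following is a minimal sketch, not taken from the book: the channel names, sampling frequency, and synthetic data are illustrative assumptions, and SciPy's Welch-based coherence estimator plays the role of Eq. (4.83).

```python
# Minimal sketch (not from the book): band-averaged magnitude-squared coherence
# between two EEG channels, as in Eq. (4.83). Channel names, sampling frequency,
# and the synthetic data are illustrative assumptions.
import numpy as np
from scipy.signal import coherence

fs = 256                                          # assumed sampling frequency (Hz)
rng = np.random.default_rng(0)
x_cz = rng.standard_normal(10 * fs)               # stand-in for channel Cz
x_c3 = 0.6 * x_cz + rng.standard_normal(10 * fs)  # stand-in for channel C3 (correlated)

# Welch-based estimate of Coh_ij(f); nperseg sets the spectral resolution.
f, coh = coherence(x_c3, x_cz, fs=fs, nperseg=fs)

# Average coherence over the alpha band (8-13 Hz) as a simple synchrony feature.
alpha_coh = coh[(f >= 8) & (f <= 13)].mean()
print(f"mean alpha-band coherence: {alpha_coh:.2f}")
```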
Granger causality (also known as Wiener–Granger causality) [30] is another measure, which attempts to extract and quantify the directionality from EEGs. Granger causality is based on bivariate AR estimates of the data. In a multichannel environment this causality is calculated from pairwise combinations of electrodes. This method has been used to evaluate the directionality of the source movement from the local field potential in the visual system of cats [31].
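A pairwise test of this kind can be sketched as follows; this is an illustration rather than the procedure used in the cited studies. It relies on the grangercausalitytests routine in statsmodels, and the synthetic "driver" and "target" channels (with an arbitrary five-sample delay) are hypothetical.

```python
# Minimal sketch (not from the book): a pairwise, bivariate-AR Granger-causality
# test. The synthetic "driver" channel leads the "target" channel by 5 samples,
# so driver -> target causality should be detected.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(1)
n = 2000
driver = rng.standard_normal(n)
target = np.roll(driver, 5) + 0.5 * rng.standard_normal(n)   # target lags driver

# Column order matters: the test asks whether the 2nd column helps predict the 1st.
data = np.column_stack([target, driver])
results = grangercausalitytests(data, maxlag=10)

stat, p_value, _, _ = results[5][0]['ssr_ftest']              # F-test at lag 5
print(f"F = {stat:.1f}, p = {p_value:.3g}")
```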
For multivariate data in a multichannel recording, however, application of Granger causality is not computationally efficient [31, 32]. The directed transfer function (DTF) [33], as an extension of Granger causality, is obtained from multichannel data and can be used to detect and quantify the coupling directions. The advantage of the DTF over spectral coherence is that it can determine the directionality of the coupling even when the spectra of the two brain regions overlap. The DTF has been adopted by a number of researchers for determining the directionality of coupling [34, 35], since it has been demonstrated [36] that there is a directed flow of information, or cross-talk, between the sensors around the sensorimotor area before finger movement. The DTF is based on fitting the EEGs to an MVAR model. Assuming that x(n) is an M-channel EEG signal, it can be modelled in vector form as:

x(n) = −Σ_{k=1}^{p} L_k x(n − k) + v(n)   (4.84)
where n is the discrete-time index, p is the prediction order, v(n) is zero-mean noise, and the L_k, for k = 1, …, p, are M × M matrices of prediction coefficients. An algorithm similar to the Durbin algorithm for single-channel signals, namely the Levinson–Wiggins–Robinson (LWR) algorithm, is used to calculate the MVAR coefficients [14]. The Akaike information criterion (AIC) [37] is also used for estimation of the prediction order p. By multiplying both sides of Eq. (4.84) by x^T(n − q) and taking the statistical expectation, a set of Yule–Walker equations is obtained as [38]:
Σ_{k=0}^{p} L_k R(k − q) = 0,  q = 1, 2, …, p,  with L_0 = I   (4.85)
where R(q) = E[x(n)x^T(n + q)] is the covariance matrix of x(n). The cross-correlations between the signal and the noise are zero since they are assumed uncorrelated, and the noise autocorrelation is zero for non-zero lags since the noise samples are uncorrelated with each other. The data segment is considered short enough for the signal to remain statistically stationary within that interval and long enough to enable accurate estimation of the prediction coefficients. Given the MVAR model coefficients, a multivariate spectrum can be obtained. Here it is assumed that the residual signal, v(n), is white noise. Therefore,
L_f(f) X(f) = V(f)   (4.86)

where X(f) and V(f) are the Fourier transforms of x(n) and v(n), and

L_f(f) = Σ_{k=0}^{p} L_k e^{−j2πfk}   (4.87)

with L_0 = I. Rearranging Eq. (4.86) and replacing the noise spectrum by σ_v²I yields

X(f) = L_f^{−1}(f) V(f) = H(f) V(f),  so that  S_x(f) = σ_v² H(f) H^H(f)   (4.88)
which represents the model spectrum of the signals, with H(f) the transfer matrix of the MVAR system. The DTF, or causal relationship between channel i and channel j, can be defined directly from the elements of the transfer matrix [32] as:
θ_ij²(f) = |H_ij(f)|²   (4.89)
Electrode i is causal to j at frequency f if:
θ_ij(f) ≠ 0   (4.90)
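The chain above, fitting the MVAR model, forming L_f(f) and the transfer matrix H(f), and evaluating θ_ij²(f), can be sketched in a few lines of code. The following is a minimal illustration rather than the book's implementation: it assumes an EEG array x of shape (samples × channels), uses statsmodels' VAR estimator with AIC order selection in place of the LWR algorithm, and the function name dtf is ours.

```python
# Minimal sketch (not from the book) of the MVAR -> transfer matrix -> DTF chain.
# Assumptions: `x` is an EEG array of shape (n_samples, M); statsmodels' VAR
# estimator (with AIC order selection) stands in for the LWR algorithm.
import numpy as np
from statsmodels.tsa.api import VAR

def dtf(x, fs, max_order=20, n_freqs=128):
    """Return frequencies and theta2[f, i, j] = |H_ij(f)|^2 as in Eq. (4.89)."""
    res = VAR(x).fit(maxlags=max_order, ic='aic')        # MVAR fit, order p by AIC
    p, m = res.k_ar, x.shape[1]

    # Build L_k: L_0 = I and L_k = -A_k, so that sum_k L_k x(n-k) = v(n).
    L = np.zeros((p + 1, m, m))
    L[0] = np.eye(m)
    L[1:] = -res.coefs                                    # res.coefs has shape (p, m, m)

    freqs = np.linspace(0.0, fs / 2.0, n_freqs)
    theta2 = np.zeros((n_freqs, m, m))
    for fi, f in enumerate(freqs):
        # L_f(f) = sum_k L_k exp(-j 2 pi f k / fs), cf. Eq. (4.87)
        Lf = sum(L[k] * np.exp(-2j * np.pi * f * k / fs) for k in range(p + 1))
        H = np.linalg.inv(Lf)                             # transfer matrix, Eq. (4.88)
        theta2[fi] = np.abs(H) ** 2                       # DTF, Eq. (4.89)
    return freqs, theta2

# Example with synthetic 3-channel data (illustrative only).
rng = np.random.default_rng(2)
x = rng.standard_normal((5 * 256, 3))
freqs, theta2 = dtf(x, fs=256)
```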
A time-varying DTF can also be generated (mainly to track the source signals) by calculating the DTF over short sliding windows, yielding the short-time DTF (SDTF) [32].
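A short-time version can be obtained simply by recomputing the DTF on each windowed segment. The sketch below reuses the hypothetical dtf helper from the previous example; the window and step lengths are arbitrary illustrative choices.

```python
# Minimal sketch (not from the book) of a short-time DTF: the `dtf` helper from
# the previous sketch is recomputed over short sliding windows.
import numpy as np

def sdtf(x, fs, win_sec=1.0, step_sec=0.25, **dtf_kwargs):
    win, step = int(win_sec * fs), int(step_sec * fs)
    thetas, freqs = [], None
    for start in range(0, x.shape[0] - win + 1, step):
        freqs, theta2 = dtf(x[start:start + win], fs, **dtf_kwargs)
        thetas.append(theta2)                      # DTF of one short segment
    return freqs, np.stack(thetas)                 # shape: (n_windows, n_freqs, M, M)
```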
The SDTF therefore plays an important role as a feature for the classification of left- and right-finger movements and for tracking mental-task-related sources. Some results of using the SDTF for detection and classification of finger movement are presented in the context of brain–computer interfacing (BCI).