
Unsupervised Motor Imagery Saliency Detection Based on Self-Attention Mechanism

Navid Ayoobi and Elnaz Banan Sadeghian

Navid Ayoobi and Elnaz Banan Sadeghian are with the Department of Electrical and Computer Engineering, Stevens Institute of Technology, Hoboken, USA. (E-mail: nayoobi@stevens.edu, ebsadegh@stevens.edu)
Abstract

Detecting the salient parts of motor-imagery electroencephalogram (MI-EEG) signals can enhance the performance of the brain-computer interface (BCI) system and reduce the computational burden required for processing lengthy MI-EEG signals. In this paper, we propose an unsupervised method based on the self-attention mechanism to detect the salient intervals of MI-EEG signals automatically. Our suggested method can be used as a preprocessing step within any BCI algorithm to enhance its performance. The effectiveness of the suggested method is evaluated on the most widely used BCI algorithm, the common spatial pattern (CSP) algorithm, using dataset 2a from BCI competition IV. The results indicate that the proposed method can effectively prune MI-EEG signals and significantly enhance the performance of the CSP algorithm in terms of classification accuracy.

I INTRODUCTION

A brain-computer interface (BCI) bypasses the peripheral nervous and neuromuscular systems for communicating with the surroundings and enables individuals to control external devices by directly translating neurophysiological brain signals into commands [1, 2]. Among all signal acquisition methods in BCI, electroencephalography is employed in the vast majority of non-invasive systems. Further, motor imagery electroencephalogram (MI-EEG) patterns, captured when an individual imagines performing a movement, are often used as the most discriminative patterns for classification. However, these MI-EEG signals have a poor signal-to-noise ratio (SNR). Moreover, numerous forms of artifacts, such as electromyogram (EMG), power line interference, electrooculogram (EOG), eye movements, and blinks, degrade the SNR even further [3]. Due to the low SNR, unprocessed MI-EEG signals result in poor detection and classification performance. Therefore, enhancing the SNR is necessary for developing a reliable BCI system.

Artifact removal is one of the most popular preprocessing methods for increasing the SNR of EEG signals. In [4], a method based on blind source separation is proposed to remove EMG artifacts from EEG data with a limited number of electrodes. Chavez et al. [5] adopt wavelet decomposition and a resampling method to detect and remove ocular and muscular artifacts from single-channel EEG across different scales. However, these methods attempt to remove the artifacts from the entire EEG signal, whereas the user's performance while performing an MI task can vary during a trial. Therefore, using the entire trial is not optimal for classification and degrades the classification accuracy. Moreover, processing a lengthy EEG signal is computationally inefficient. Numerous efforts have addressed this issue by using a time window to detect the most salient interval in a trial for MI classification [1, 6]: Gaur et al. [1] use a 2-second sliding window to split the EEG signal into nine intervals, where the common spatial pattern (CSP) algorithm is used to extract features and nine linear discriminant analysis (LDA) classifiers are trained separately, one per interval. The longest consecutive repetition is then used to choose the best label among the nine labels generated by the different LDAs. Hsu et al. [6] propose a method based on the continuous wavelet transform and Student's two-sample t-statistics to detect one salient interval within every 4-second trial using a 1-second searching window. However, these approaches do not detect the entirety of the salient intervals, which can be several in number and of varying lengths. Furthermore, they require labeled data, and their effectiveness depends on the selected classifier's ability to separate the classes.

This paper proposes a method based on the self-attention mechanism [7] to automatically extract several salient intervals in every MI-EEG trial. Furthermore, the method is unsupervised and therefore does not require any labeled data. The proposed method can be applied to MI-EEG signals as a preprocessing step within any BCI algorithm to improve the SNR and enhance its performance. The proposed architecture comprises an input embedding layer, an encoder-decoder network, a self-attention layer, and a dense layer. We utilize the encoder-decoder structure to reconstruct the input, yielding an unsupervised architecture. The self-attention layer assigns a weight to each input time sample and aids the decoder in precisely reconstructing the input from the time samples with the highest weights, which represent the salient parts of each trial. Consequently, the proposed method can detect different numbers of salient intervals with varying lengths. After the training phase, the decoder is removed from the network, and the trained encoder and self-attention layer are then used to extract the salient intervals of unseen testing trials.

To evaluate our approach, we use dataset 2a from BCI competition IV to extract the salient intervals of the MI trials. We compare the performance of the most widely used algorithm in BCI, the CSP algorithm [8], with and without our proposed method. The results show that the proposed method can effectively extract the salient intervals of MI-EEG signals and significantly improve the CSP performance in terms of classification accuracy.

The remainder of this article is organized as follows. Section II introduces the dataset. Section III elaborates on the proposed method and architecture. Section IV presents the simulation results and discussion. Section V concludes this article.

II Dataset

To evaluate our proposed method, we use dataset 2a from BCI competition IV [9], which contains the EEG data of nine healthy subjects. This dataset consists of movement imaginations of the left and right hands, feet, and tongue. The EEG signals are recorded using twenty-two Ag/AgCl electrodes and sampled at 250 Hz. The timing scheme of each trial is depicted in Fig. 1. In this paper, we only utilize the left-hand and right-hand trials. We also extract all our trials from second 2 (right after the cue) to second 6.

Figure 1: Timing scheme of each trial in dataset 2a from BCI competition IV.

III Methods

In this section, we elaborate on our proposed unsupervised method for detecting the salient parts of EEG trials to improve classification accuracy. The suggested architecture consists of four main parts: input embedding, a long short-term memory (LSTM) encoder-decoder, self-attention, and a dense layer. The encoder-decoder structure helps the self-attention layer learn the salient parts of the EEG trials in an unsupervised manner by reconstructing the input. The proposed architecture is shown in Fig. 2.

III-A The proposed architecture

Input embedding: Given a multivariate signal as input, the embedding layer captures the dependencies between different channels [10] in EEG signals and creates a representation vector for each time sample. We employ a 1D convolutional layer with $m$ kernels of dimension $d$. This layer produces an $m$-dimensional vector for each time sample by looking at the current and the next $d-1$ time samples. We utilize zero padding to ensure that the input and output sequences have the same length.
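To make this layer concrete, here is a minimal PyTorch sketch under the stated settings; the right-sided zero padding realizes the "current and next $d-1$ samples" receptive field, and all class and variable names are illustrative, not the authors' implementation.

```python
# A minimal sketch of the input embedding layer, assuming a trial is stored
# as a (batch, channels, time) tensor. Names and shapes are illustrative.
import torch
import torch.nn as nn

class InputEmbedding(nn.Module):
    """Maps each time sample to an m-dimensional vector with a 1D convolution."""
    def __init__(self, n_channels: int, m: int = 5, d: int = 4):
        super().__init__()
        # Right-sided zero padding so the kernel sees the current sample and
        # the next d-1 samples while keeping the output length equal to T.
        self.pad = nn.ConstantPad1d((0, d - 1), 0.0)
        self.conv = nn.Conv1d(in_channels=n_channels, out_channels=m, kernel_size=d)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_channels, T) -> (batch, m, T)
        return self.conv(self.pad(x))

emb = InputEmbedding(n_channels=22)   # 22 electrodes in dataset 2a
trial = torch.randn(1, 22, 1000)      # a 4-second trial sampled at 250 Hz
print(emb(trial).shape)               # torch.Size([1, 5, 1000])
```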

LSTM encoder: The LSTM, a variant of recurrent neural networks, can capture long temporal dependencies in sequential data [11]. We employ LSTM units to build our encoder and decoder. The encoder processes the input sequence of length $T$ and summarizes the temporal information of each time sample in two $h$-dimensional vectors, the hidden state $h_{e}$ and the cell state $c_{e}$. The produced hidden states are then fed to a self-attention layer. After $T$ recurrences, the whole sequence is summarized in the final encoder hidden state $h_{eT}$ and cell state $c_{eT}$. These two vectors are used as the decoder's initial hidden state $h_{d0}$ and cell state $c_{d0}$.
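A corresponding sketch of the encoder is shown below: the per-step hidden states feed the self-attention layer, while the final hidden and cell states initialize the decoder. The names are ours.

```python
# A sketch of the LSTM encoder, assuming the embedding output has been
# transposed to (batch, T, m). Illustrative only.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, m: int = 5, h: int = 4):
        super().__init__()
        self.lstm = nn.LSTM(input_size=m, hidden_size=h, batch_first=True)

    def forward(self, e: torch.Tensor):
        # e: (batch, T, m) embedded sequence
        h_all, (h_T, c_T) = self.lstm(e)
        # h_all: (batch, T, h) -> hidden states h_e1..h_eT for self-attention
        # h_T, c_T: final hidden/cell states, used to initialize the decoder
        return h_all, (h_T, c_T)
```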

Figure 2: The block diagram of the proposed method.

Self-attention: Attention was originally proposed in [7] to improve the performance of an encoder-decoder in machine translation. It alleviates the bottleneck of compressing the source into a fixed-length vector from which the decoder produces the output sentence, by allowing the model to automatically seek the parts of the source sentence that are significant for predicting a target word. Self-attention, also known as intra-attention, is an attention mechanism that relates distinct positions of a single sequence to compute a representation of the sequence [12]. In this work, we adopt the self-attention mechanism to detect the salient parts of each MI-EEG trial. In our self-attention layer, the $i$-th encoder hidden state $h_{ei}=\{h_{ei,1},\ldots,h_{ei,h}\}$ is mapped to an $n_k$-dimensional query vector $q_i$, an $n_k$-dimensional key vector $k_i$, and an $n_v$-dimensional value vector $v_i$ by multiplying $h_{ei}$ with three matrices $W_q$, $W_k$, and $W_v$, respectively:

$q_i = W_q h_{ei}$ (1)
$k_i = W_k h_{ei}$ (2)
$v_i = W_v h_{ei}$ (3)

These vectors are used to calculate a context vector at each position in the sequential data. Here, dot-product attention [12] is used to compute the context vectors. The context vector at the $i$-th position is a weighted sum of the value vectors at all positions

$c_i = \sum_{t=1}^{T} \alpha_{i,t} v_t,$ (4)

where the attention weight $\alpha_{i,t}$ measures how well the $i$-th encoder hidden state aligns with the encoder hidden state at position $t$. The attention weights are computed as the inner product of the query with the corresponding key. After computing the weights associated with all keys, a softmax function is applied to make the weights sum to one. If all the query, key, and value vectors are packed together into matrices $\mathbf{Q}$, $\mathbf{K}$, and $\mathbf{V}$, the context matrix $\mathbf{C}$ is calculated as follows

$\mathbf{C} = \mathrm{softmax}(\mathbf{Q}\mathbf{K}^{T})\mathbf{V}.$ (5)
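As an illustration, below is a minimal PyTorch sketch of the self-attention layer in Eqs. (1)-(5): three learned linear maps produce the queries, keys, and values, followed by unscaled dot-product attention as written in Eq. (5). The class and variable names are ours, not the authors' implementation.

```python
# A sketch of the self-attention layer described by Eqs. (1)-(5).
import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    def __init__(self, h: int = 4, n_k: int = 4, n_v: int = 4):
        super().__init__()
        self.W_q = nn.Linear(h, n_k, bias=False)   # Eq. (1)
        self.W_k = nn.Linear(h, n_k, bias=False)   # Eq. (2)
        self.W_v = nn.Linear(h, n_v, bias=False)   # Eq. (3)

    def forward(self, h_all: torch.Tensor):
        # h_all: (batch, T, h) encoder hidden states
        Q, K, V = self.W_q(h_all), self.W_k(h_all), self.W_v(h_all)
        # Attention weights: softmax over key positions, Eq. (5)
        A = torch.softmax(Q @ K.transpose(-2, -1), dim=-1)  # (batch, T, T)
        C = A @ V                                           # context, Eq. (4)
        return C, A   # A is the matrix Lambda used at test time
```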

LSTM decoder and dense layer: The LSTM decoder accepts the final hidden state and cell state of the encoder as its initial hidden state and cell state. The context vectors generated by the self-attention layer are fed to the decoder as inputs. The decoder outputs are then passed through a dense layer that increases their dimensionality to the number of EEG channels. The decoder and dense layer learn to reconstruct the original MI trial, compelling the self-attention layer to pay more attention to the parts of the MI trial that are most significant for reconstructing the input.
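A sketch of the decoder side follows; the ReLU activations are our assumption, since the paper specifies only the hidden-layer sizes of the dense network (5, 10, and 15 nodes, per Section IV). Names are illustrative.

```python
# A sketch of the LSTM decoder plus dense stack that maps the context
# vectors back to the n_c EEG channels.
import torch
import torch.nn as nn

class Decoder(nn.Module):
    def __init__(self, n_v: int = 4, h: int = 4, n_channels: int = 22):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_v, hidden_size=h, batch_first=True)
        self.dense = nn.Sequential(            # three hidden layers (5, 10, 15)
            nn.Linear(h, 5), nn.ReLU(),        # activations are an assumption
            nn.Linear(5, 10), nn.ReLU(),
            nn.Linear(10, 15), nn.ReLU(),
            nn.Linear(15, n_channels),
        )

    def forward(self, C: torch.Tensor, state):
        # C: (batch, T, n_v) context vectors; state: encoder's (h_T, c_T)
        d_all, _ = self.lstm(C, state)
        return self.dense(d_all)               # (batch, T, n_channels)
```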

III-B Training and Testing stages

Training stage: To improve the performance of the proposed network in reconstructing the input EEG $x$, we randomly select a fraction $p_1$ of the time samples of $x$ and set a fraction $p_2$ of their feature values to zero to obtain $\tilde{x}$. In the training stage, $\tilde{x}$ is used as the training trial, and its corresponding original EEG signal $x$ is used as the ground truth. The mean squared error is used as the loss function

$Q = \frac{1}{T n_c} \sum_{t=1}^{T} \sum_{i=1}^{n_c} \left(\hat{x}_t^{(i)} - x_t^{(i)}\right)^2,$ (6)

where $n_c$, $\hat{x}$, and $x$ are the number of EEG channels, the reconstructed EEG, and the original EEG, respectively.
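The following sketch illustrates one possible implementation of this masking scheme and loss, under our reading that $p_1$ and $p_2$ denote the fractions of time samples and of channel values to corrupt (consistent with the values 0.6 and 0.4 in Section IV).

```python
# A sketch of the masking augmentation and the MSE loss of Eq. (6).
import torch

def mask_trial(x: torch.Tensor, p1: float = 0.6, p2: float = 0.4) -> torch.Tensor:
    # x: (batch, n_channels, T). Zero p2 of the channel values at p1 of the
    # time samples (our interpretation of the masking scheme).
    x_tilde = x.clone()
    batch, n_c, T = x.shape
    for b in range(batch):
        t_idx = torch.randperm(T)[: int(p1 * T)]          # corrupted samples
        for t in t_idx:
            c_idx = torch.randperm(n_c)[: int(p2 * n_c)]  # zeroed values
            x_tilde[b, c_idx, t] = 0.0
    return x_tilde

loss_fn = torch.nn.MSELoss()   # Eq. (6): mean squared reconstruction error
# One training step (schematically):
# x_hat = model(mask_trial(x)); loss = loss_fn(x_hat, x); loss.backward(); ...
```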

Testing stage: The decoder and dense layer are removed from the network for the testing stage. A test trial is passed through the embedding layer, the encoder, and the self-attention layer. The computed attention weights $\{\alpha\}$ are used to detect the salient parts of the test trial. Assume that the attention weights of a test trial are represented in a matrix $\Lambda_{T\times T}$. The attention at position $t$, $a_t$, is computed as follows

$a_t = \frac{1}{T}\sum_{j=1}^{T} \Lambda_{j,t}.$ (7)

The computed attentions form an attention vector $A = [a_1,\ldots,a_T]$. To separate salient intervals, we segment the attention vector into $n$ intervals, and the $r$ out of $n$ intervals with the highest average attention are selected. These two hyperparameters are tuned in the cross-validation procedure. The time samples corresponding to the selected attention intervals are then concatenated to form the pruned trial.
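A sketch of this pruning step follows, assuming the attention matrix $\Lambda$ is available from the trained encoder and self-attention layer; function and variable names are illustrative.

```python
# A sketch of test-time pruning: average the attention matrix column-wise
# (Eq. (7)), split into n segments, and keep the r segments with the
# highest mean attention.
import torch

def prune_trial(x: torch.Tensor, Lam: torch.Tensor, n: int, r: int) -> torch.Tensor:
    # x: (n_channels, T) test trial; Lam: (T, T) attention weights
    a = Lam.mean(dim=0)                      # attention vector A, Eq. (7)
    T = a.shape[0]
    seg = T // n                             # samples per segment, T/n
    scores = a[: seg * n].reshape(n, seg).mean(dim=1)
    keep = torch.topk(scores, r).indices.sort().values   # preserve time order
    parts = [x[:, i * seg : (i + 1) * seg] for i in keep]
    return torch.cat(parts, dim=1)           # pruned trial of length r*T/n
```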

IV Results and discussion

Figure 3: Comparison of the performance of the CSP algorithm for different subjects in the dataset before and after applying our proposed method in terms of classification accuracy.

In all our experiments, the MI-EEG signals are bandpass filtered between 4 Hz and 40 Hz with a fifth-order Butterworth filter. For all subjects, the number of kernels $m$, kernel dimension $d$, LSTM hidden-state dimension $h$, key-query dimension $n_k$, value-vector dimension $n_v$, parameter $p_1$, and parameter $p_2$ are set to 5, 4, 4, 4, 4, 0.6, and 0.4, respectively. A deep neural network with three hidden layers (with 5, 10, and 15 nodes) is used for the dense layer. The parameters $r$ and $n$ are tuned by 5-fold cross-validation for each subject. In addition, we use three pairs of spatial filters in the CSP algorithm [8].
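A minimal sketch of this preprocessing step using SciPy is shown below; the zero-phase filtering via filtfilt is our assumption, as the filtering direction is not specified in the text.

```python
# A sketch of the 4-40 Hz fifth-order Butterworth bandpass at fs = 250 Hz.
import numpy as np
from scipy.signal import butter, filtfilt

b, a = butter(N=5, Wn=[4, 40], btype="bandpass", fs=250)
trial = np.random.randn(22, 1000)          # (channels, time) placeholder data
filtered = filtfilt(b, a, trial, axis=-1)  # zero-phase filtering per channel
```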

IV-A The effect of the proposed method on the classification performance

To evaluate the proposed method, we compare the performance of the CSP algorithm under two scenarios: 1) with and 2) without applying our method to the MI-EEG trials.

In the first scenario, we train our proposed architecture with the 4-second training trials. After the training stage, we use the encoder and self-attention layer to extract the salient intervals from the training and testing trials. The spatial filters of the CSP algorithm are then designed using the pruned training trials, and the accuracy of the CSP algorithm is computed using the pruned testing trials. In the second scenario, the 4-second training and testing trials are used directly to design the spatial filters and to obtain the accuracy of the CSP algorithm, respectively. In both scenarios, linear discriminant analysis is used as the classifier.
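For concreteness, the sketch below outlines how the two scenarios could be scored, assuming MNE's CSP implementation (three filter pairs, i.e., six components) and scikit-learn's LDA; this is an illustrative pipeline, not the authors' evaluation code, and the pruned arrays would come from the pruning step sketched in Section III-B.

```python
# A sketch of scoring CSP + LDA under the two scenarios.
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def csp_lda_accuracy(X_train, y_train, X_test, y_test):
    # X_*: (n_trials, n_channels, n_times); three filter pairs -> 6 components
    csp = CSP(n_components=6, log=True)
    lda = LinearDiscriminantAnalysis()
    lda.fit(csp.fit_transform(X_train, y_train), y_train)
    return lda.score(csp.transform(X_test), y_test)

# Scenario 1: acc_pruned   = csp_lda_accuracy(Xtr_pruned, ytr, Xte_pruned, yte)
# Scenario 2: acc_unpruned = csp_lda_accuracy(Xtr, ytr, Xte, yte)
```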

The results are shown in Fig. 3 for all the subjects. We observe that the classification accuracy improves by approximately 14.2%, 19.4%, 6.6%, 31.9%, 24.7%, 9.2%, 10.4%, 1.5%, and 7.4% for subjects one through nine, respectively. On average, the proposed method enhances the classification accuracy of the CSP algorithm by about 13.9% across all subjects.

In addition, the ratio of the length of the pruned trials to the length of the unpruned trials is 0.7, 0.65, 0.45, 0.7, 0.55, 0.7, 0.75, 0.75, and 0.75 for subjects 1 through 9, respectively. Hence, the proposed method simultaneously reduces the computational burden required for processing EEG signals and increases the classification accuracy.

IV-B The effect of segment length on attention vector splitting

This section evaluates the effect of different segment lengths $T/n$ used for splitting the attention vector to extract the salient intervals. Fig. 4 compares the results in terms of classification accuracy for subject one versus the number of time samples in each interval, $T/n$. The $r$ intervals with the highest average attention are concatenated to form a trial containing $\ell = rT/n = 750$ (red dashed line), $\ell = 550$ (green dashed line), $\ell = 350$ (orange dashed line), and $\ell = 250$ (blue dashed line) time samples. The black horizontal line shows the classification accuracy of the CSP algorithm without our method. We observe that restricting the concatenated signal to a very short length ($\ell = 350$ and $\ell = 250$) results in lower accuracy compared to the CSP method alone. As shown, the best curves correspond to $\ell = 750$ and $\ell = 550$. These curves show that using the entire length of the trials does not always achieve the best obtainable accuracy.

In addition, the results suggest that shorter segment lengths $T/n$ generally provide higher accuracy than longer segment lengths. In other words, using multiple time windows instead of a single time window [1, 6] allows more salient intervals to be detected within a trial. The proposed method may therefore be employed as a preprocessing step in MI-BCI studies for enhancing motor imagery classification.

Figure 4: The effect of different segment lengths on attention vector splitting for subject one.

V Conclusion

In this paper, we present an unsupervised method based on the self-attention mechanism to automatically detect the salient intervals of MI-EEG trials. We test our proposed method using the CSP algorithm on dataset 2a from BCI competition IV. The results show that pruning the MI-EEG trials by the proposed method can effectively improve the SNR and reduce the computational burden in BCI systems. The results also show that the average classification accuracy of the CSP algorithm improves by approximately 13.9% across all subjects. The suggested approach may be used as a preprocessing step within any BCI algorithm to improve its performance.

References

  • [1] P. Gaur, H. Gupta, A. Chowdhury, K. McCreadie, R. B. Pachori, and H. Wang, “A sliding window common spatial pattern for enhancing motor imagery classification in EEG-BCI,” IEEE Transactions on Instrumentation and Measurement, vol. 70, pp. 1–9, 2021.
  • [2] E. Banan Sadeghian and M. H. Moradi, “Continuous detection of motor imagery in a four-class asynchronous BCI,” in 2007 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2007, pp. 3241–3244.
  • [3] Q. Wei, S. Zhu, Y. Wang, X. Gao, H. Guo, and X. Wu, “Maximum signal fraction analysis for enhancing signal-to-noise ratio of EEG signals in SSVEP-based BCIs,” IEEE Access, vol. 7, pp. 85 452–85 461, 2019.
  • [4] L. Zou, X. Chen, G. Dang, Y. Guo, and Z. J. Wang, “Removing muscle artifacts from EEG data via underdetermined joint blind source separation: A simulation study,” IEEE Transactions on Circuits and Systems II: Express Briefs, vol. 67, no. 1, pp. 187–191, 2019.
  • [5] M. Chavez, F. Grosselin, A. Bussalb, F. D. V. Fallani, and X. Navarro-Sune, “Surrogate-based artifact removal from single-channel EEG,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 26, no. 3, pp. 540–550, 2018.
  • [6] W.-Y. Hsu, C.-C. Lin, M.-S. Ju, and Y.-N. Sun, “Wavelet-based fractal features with active segment selection: Application to single-trial EEG data,” Journal of Neuroscience Methods, vol. 163, no. 1, pp. 145–160, 2007.
  • [7] D. Bahdanau, K. Cho, and Y. Bengio, “Neural machine translation by jointly learning to align and translate,” arXiv preprint arXiv:1409.0473, 2014.
  • [8] B. Blankertz, R. Tomioka, S. Lemm, M. Kawanabe, and K.-R. Müller, “Optimizing spatial filters for robust EEG single-trial analysis,” IEEE Signal Processing Magazine, vol. 25, no. 1, pp. 41–56, 2008.
  • [9] C. Brunner, R. Leeb, G. Müller-Putz, A. Schlögl, and G. Pfurtscheller, “BCI Competition 2008–Graz data set A,” Institute for Knowledge Discovery (Laboratory of Brain-Computer Interfaces), Graz University of Technology, vol. 16, pp. 1–6, 2008.
  • [10] H. Song, D. Rajan, J. J. Thiagarajan, and A. Spanias, “Attend and diagnose: Clinical time series analysis using attention models,” in Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
  • [11] K. Greff, R. K. Srivastava, J. Koutník, B. R. Steunebrink, and J. Schmidhuber, “LSTM: A search space odyssey,” IEEE Transactions on Neural Networks and Learning Systems, vol. 28, no. 10, pp. 2222–2232, 2016.
  • [12] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, “Attention is all you need,” in Advances in Neural Information Processing Systems, 2017, pp. 5998–6008.