Deep learning for electroencephalogram (EEG) classification tasks: a review
Paper link: Deep learning for electroencephalogram (EEG) classification tasks: a review (Semantic Scholar)
Introduction
The review addresses critical questions about EEG classification with deep learning:
- Which classification tasks have been studied with deep learning
- Which input forms have been used to train deep learning models
- Whether specific deep learning architectures suit specific tasks
Methods
PRISMA Study selection
The criteria for selecting papers to review are as follows.
Review papers commonly use this kind of PRISMA study-selection structure.
I think the Results and Discussion are the most important sections.
Results
Result_1. EEG classification tasks explored with deep learning
Result_2. Preprocessing methods
Result_3. Input formulations for deep learning
Needs
- Event-related potential (ERP) and sleep stage scoring tasks have only used images as inputs (the common input forms are sketched below)
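As a concrete illustration of the three input formulations the review distinguishes (signal values, calculated features, and images), here is a minimal Python sketch for a single hypothetical epoch. The sampling rate, band edges, and spectrogram settings are my own illustrative choices, not values from the review.

```python
import numpy as np
from scipy.signal import spectrogram

# Hypothetical single-channel EEG epoch: 4 s at 128 Hz (values are random placeholders).
fs = 128
epoch = np.random.randn(4 * fs)

# Formulation 1: raw signal values -- feed the (time,) or (channels, time) array directly.
signal_input = epoch

# Formulation 2: calculated features -- e.g., simple band power per canonical EEG band.
freqs = np.fft.rfftfreq(epoch.size, d=1 / fs)
power = np.abs(np.fft.rfft(epoch)) ** 2
bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
feature_input = np.array(
    [power[(freqs >= lo) & (freqs < hi)].mean() for lo, hi in bands.values()]
)

# Formulation 3: image -- a time-frequency spectrogram that CNNs can treat like a picture.
f, t, Sxx = spectrogram(epoch, fs=fs, nperseg=64, noverlap=32)
image_input = 10 * np.log10(Sxx + 1e-12)   # (freq bins, time bins) "image"

print(signal_input.shape, feature_input.shape, image_input.shape)
```

Roughly, `image_input` is what a 2D CNN would consume, `feature_input` is what a DBN or SAE might take, and `signal_input` is what an RNN or 1D CNN could process directly.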
Result_4. Deep learning architecture trends (as of 2019)
Activation Function
- Convolutional layers: ReLU (70%), ELU (8%), Leaky ReLU (8%), hyperbolic tangent (5%)
- Fully-connected layers
  - Non-classifier layers: sigmoid
  - Classifier layer: softmax
- Non-classifier AE (autoencoder) layers: sigmoid vs. ReLU (open question)
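A hedged PyTorch sketch of the trend above: ReLU after the convolutional layers, sigmoid on the hidden fully-connected layer, and softmax on the classifier output. The layer sizes and the class name are illustrative assumptions, not an architecture taken from the review.

```python
import torch
import torch.nn as nn

class EEGConvSketch(nn.Module):
    def __init__(self, n_channels=32, n_samples=512, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=7, padding=3),
            nn.ReLU(),                      # dominant choice for conv layers (~70%)
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),
        )
        self.hidden = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (n_samples // 16), 64),
            nn.Sigmoid(),                   # common for non-classifier FC layers
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                   # x: (batch, channels, samples)
        x = self.classifier(self.hidden(self.features(x)))
        return torch.softmax(x, dim=1)      # softmax on the classifier output

model = EEGConvSketch()
print(model(torch.randn(2, 32, 512)).shape)  # -> torch.Size([2, 4])
```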
Needs
- The most effective activation function for SAEs (sigmoid vs. ReLU)
- ERP tasks used CNNs most often
- Seizure detection studies did not use DBNs
- Sleep stage scoring used hybrid architectures most often
Result_5. Case studies on a shared dataset
Studies using DEAP, a common emotion recognition dataset, are collected and compared.
* PSD: power spectral density
For this dataset (emotion recognition), the most effective reported architectures were DBNs, CNNs, and RNNs.
In study 65, it is tempting to assume that a more complex architecture would lead to a higher classification rate, but an architecture using an RNN alone (no convolutional layers) performed best → don't get locked into that assumption.
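Since PSD features appear among the inputs in this comparison (hence the PSD note above), here is a hedged sketch of extracting band-power PSD features with Welch's method. DEAP's preprocessed EEG is 32 channels at 128 Hz; the trial below is a random placeholder and the band edges are my own choices, not those of any specific reviewed study.

```python
import numpy as np
from scipy.signal import welch

fs = 128
trial = np.random.randn(32, 60 * fs)          # (channels, samples), one illustrative 60 s segment

freqs, psd = welch(trial, fs=fs, nperseg=fs * 2, axis=-1)   # PSD per channel

bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}
features = np.stack(
    [psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=-1) for lo, hi in bands.values()],
    axis=-1,
)                                              # (32 channels, 4 bands)
print(features.shape)
```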
Discussion
Discussion_1. Deep learning structures for specific types of tasks
About Task
- Mental Workload
  - Standard CNN < standard DBN
  - Hybrid CNN > standard DBN
- Motor Imagery
  - CNN vs. DBN
- ERP
  - SAE < DBN
  - DBN vs. CNN
- Emotion Recognition
  - DBN, CNN, RNN: 87%–89%
- Seizure Detection
  - RNN: 100%
  - CNN: 99%
  - DBN: still needs to be researched
Discussion_2. Architecture design and input formulation
About DL Strategy
DBN
- Three stacked RBMs
- End classifier: a single dense layer
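A hedged scikit-learn sketch of that DBN recipe: three stacked RBMs pretrained greedily in a pipeline, with a single dense (logistic-regression) classifier on top. A full DBN would also fine-tune the whole stack end-to-end, which this sketch omits; the feature size and hyperparameters are illustrative assumptions.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import MinMaxScaler

X = np.random.rand(200, 160)          # e.g., placeholder feature vectors (32 channels x 5 features)
y = np.random.randint(0, 2, size=200)

dbn = Pipeline([
    ("scale", MinMaxScaler()),                         # RBMs expect inputs in [0, 1]
    ("rbm1", BernoulliRBM(n_components=128, learning_rate=0.05, n_iter=20)),
    ("rbm2", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20)),
    ("rbm3", BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20)),
    ("clf", LogisticRegression(max_iter=1000)),        # the single dense classifier
])
dbn.fit(X, y)
print(dbn.score(X, y))
```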
CNN
- 1-2 Fully Connected Layers
Needs
- optimize the strategy to use images as CNN inputs
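A hedged PyTorch sketch of the common CNN recipe: a small 2D convolutional stack over an image-style input (e.g., a spectrogram), ending in the 1-2 fully-connected layers noted above. The input resolution and filter counts are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SpectrogramCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.fc = nn.Sequential(            # the 1-2 fully-connected layers
            nn.Flatten(),
            nn.Linear(16 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):                   # x: (batch, 1, freq bins, time bins)
        return self.fc(self.conv(x))

print(SpectrogramCNN()(torch.randn(4, 1, 64, 64)).shape)  # -> torch.Size([4, 2])
```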
RNN
- 2 LSTM Layers
- Image inputs: 2D/3D grids rather than spectrograms
Needs
- the most effective EEG input form for RNN
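A hedged PyTorch sketch of the 2-LSTM-layer recipe, fed with multichannel EEG as a (time, channels) sequence; all sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class EEGLSTM(nn.Module):
    def __init__(self, n_channels=32, hidden=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, num_layers=2, batch_first=True)
        self.out = nn.Linear(hidden, n_classes)

    def forward(self, x):                    # x: (batch, time, channels)
        _, (h, _) = self.lstm(x)
        return self.out(h[-1])               # classify from the last layer's final state

print(EEGLSTM()(torch.randn(4, 512, 32)).shape)  # -> torch.Size([4, 2])
```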
Discussion_3. Hybrid architectures
Across roughly 10 studies:
- Hybrids with LSTM RNN modules > standard LSTM RNN and GRU RNN modules
- The addition of non-convolutional layers to standard CNNs showed promise
- The channel-wise hybrid > the rest for cross-subject classification
  → effective for transfer learning in EEG analysis
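A hedged PyTorch sketch of a convolution + LSTM hybrid of the kind discussed above: 1D convolutions compress the raw signal in time, then an LSTM models the shortened sequence. The design is illustrative, not taken from any specific reviewed study.

```python
import torch
import torch.nn as nn

class ConvLSTMHybrid(nn.Module):
    def __init__(self, n_channels=32, n_classes=2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
        )
        self.lstm = nn.LSTM(64, 64, batch_first=True)
        self.out = nn.Linear(64, n_classes)

    def forward(self, x):                       # x: (batch, channels, time)
        z = self.conv(x).permute(0, 2, 1)       # -> (batch, reduced time, features)
        _, (h, _) = self.lstm(z)
        return self.out(h[-1])

print(ConvLSTMHybrid()(torch.randn(4, 32, 512)).shape)  # -> torch.Size([4, 2])
```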
Conclusions
- CNNs, RNNs, and DBNs outperformed other types of deep networks, such as SAEs and MLPNNs.
- Hybrid designs combining convolutional layers with recurrent layers or RBMs showed promise in classification accuracy and transfer learning.
- More research is needed on the combinations, particularly the number and arrangement of different layers, including RBMs, recurrent layers, convolutional layers, and FC layers.
Limitation and Needs
Limitation
- Does not cover newer model families such as generative models and Transformers
- Does not mention EEGNet (the best-known network in the EEG domain)
Needs
- ERP and sleep stage scoring tasks have only used images as inputs; other input forms should be explored
- The most effective activation function for SAEs (sigmoid vs. ReLU)
- Seizure detection: DBNs still need to be researched
- Optimizing the strategy for using images as CNN inputs
- The most effective EEG input form for RNNs