Brain / Paper Review & Summary

Deep learning for electroencephalogram (EEG) classification tasks: a review

First-Penguin 2023. 5. 12. 00:41

Paper link : [PDF] Deep learning for electroencephalogram (EEG) classification tasks: a review | Semantic Scholar

Introduction

 

This review addresses three critical questions about EEG classification:

- Which EEG classification tasks have been explored with deep learning?
- What input formulations have been used for training the deep networks?
- Are there specific DL network structures suitable for specific types of tasks?

 

 


Methods

 

PRISMA Study selection

 

The criteria for selecting the papers to review were:

- EEG only: no ECG, EMG, or video
- Task: classification
- Deep learning: at least two hidden layers
- Time frame: the past five years

Review papers reportedly make frequent use of this PRISMA study selection structure.

 


I consider the Results and Discussion sections the most important.

Results

1. EEG classification tasks explored with deep learning
2. Preprocessing methods
3. Input formulations for deep learning
4. Deep learning architecture trends (as of 2019)
5. Case studies on a shared dataset

 

 


Result_1. EEG classification tasks explored with deep learning

- Emotion recognition tasks
- Motor imagery tasks
- Mental workload tasks
- Seizure detection tasks
- Sleep stage scoring tasks
- Event related potential tasks

Result_2. Preprocessing methods

 

 

- Artifact removal and filtering strategies
- Frequency ranges used in EEG analysis


Result_3. Input formulations for deep learning

(A) Input formulations across all studies

 

Terminology for calculated features:
- Power spectral density (PSD)
- Wavelet decomposition
- Statistical measures
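A minimal sketch of how two of these feature types might be computed with SciPy (the helper names `band_power` and `statistical_features` are mine, not from the paper; wavelet decomposition would additionally need a package such as PyWavelets):

```python
import numpy as np
from scipy.signal import welch

def band_power(eeg, fs, band):
    """Average PSD (Welch's method) within a frequency band, per channel."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    lo, hi = band
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[..., mask].mean(axis=-1)

def statistical_features(eeg):
    """Simple per-channel statistical measures: mean, std, peak amplitude."""
    return np.stack([eeg.mean(axis=-1), eeg.std(axis=-1),
                     np.abs(eeg).max(axis=-1)], axis=-1)

fs = 128
rng = np.random.default_rng(1)
eeg = rng.standard_normal((32, fs * 10))  # 32 channels, 10 s (DEAP-like shape)
alpha = band_power(eeg, fs, (8, 13))      # alpha-band power, one value per channel
feats = statistical_features(eeg)         # (32, 3)
```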

(B) General input formulation compared across different tasks

Needs

- Event-related potential and sleep stage scoring tasks used only images as inputs.

 


Result_4. Deep learning architecture trends (as of 2019)

 

- CNN (43%)
- DBN (18%)
- Hybrid, CNN/MLP (12%)
- RNN (10%)
- MLPNN (9%)
- SAE (stacked autoencoders) (8%)
 

 

Activation Function

- Convolutional layers

     : ReLU (70%), ELU (8%), leaky ReLU (8%), hyperbolic tangent (5%)

 

- Fully-connected layers

  • non-classifier : sigmoid
  • classifier : softmax

- Non-classifier AE (autoencoder) layers

  • Sigmoid vs ReLU
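For reference, the activation functions discussed above can be written in a few lines of NumPy (a generic sketch, not code from any reviewed study):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    return np.where(x > 0, x, alpha * x)   # small slope for negative inputs

def elu(x, alpha=1.0):
    return np.where(x > 0, x, alpha * (np.exp(x) - 1))  # saturates toward -alpha

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    z = np.exp(x - x.max(axis=-1, keepdims=True))  # subtract max for stability
    return z / z.sum(axis=-1, keepdims=True)

x = np.array([-2.0, 0.0, 3.0])
print(relu(x))      # [0. 0. 3.]
print(softmax(x))   # class probabilities summing to 1
```

This matches the split noted above: ReLU-family functions in convolutional layers, sigmoid in non-classifier fully-connected layers, softmax in the final classifier.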
 

Needs

Research on the most effective activation function for SAEs.

 

 

 

- ERP: CNNs were used most often.
- Seizure detection: DBNs were not used.
- Sleep stage scoring: hybrid architectures were used most often.

 


Result_5. Case studies on a shared dataset

Studies that used DEAP, a common emotion recognition dataset, are gathered and compared.

 

* PSD: power spectral density

 

For this dataset, the most effective reported architectures were DBN, CNN, and RNN (emotion recognition).

 

In study [65], it is tempting to assume that a more complex architecture leads to higher classification rates, yet an architecture using an RNN alone (no convolutional layers) performed best → don't be trapped by preconceptions.

 

 


Discussion

Contents

1. Deep learning structures for specific types of tasks
2. Architecture design and input formulation
3. Hybrid architectures

 

- Sleep stage scoring applications are not included in this discussion.

 


Discussion_1. Deep learning structures for specific types of tasks

 

Task-specific deep learning recommendation diagram

 

By task

  • Mental workload
    • standard CNN < standard DBN
    • hybrid CNN > standard DBN
  • Motor imagery
    • CNN vs DBN
  • ERP
    • SAE < DBN
    • DBN vs CNN
  • Emotion recognition
    • DBN, CNN, RNN: 87% ~ 89%
  • Seizure detection
    • RNN: 100%
    • CNN: 99%
    • need to research DBNs

 


Discussion_2. Architecture design and input formulation

About DL strategies

DBN

  • three RBMs
  • end classifier: a single dense layer
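A toy forward pass matching this recipe: three stacked sigmoid layers stand in for the pre-trained RBMs, topped by one dense softmax classifier. All layer sizes and weights here are illustrative placeholders (a real DBN would pre-train each weight matrix with contrastive divergence):

```python
import numpy as np

rng = np.random.default_rng(2)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# Toy DBN layer sizes: input features -> three RBM hidden layers
sizes = [128, 64, 32, 16]
# Random placeholders; in a real DBN these come from RBM pre-training.
weights = [rng.standard_normal((i, o)) * 0.1 for i, o in zip(sizes, sizes[1:])]

def dbn_forward(x, weights, w_out):
    h = x
    for w in weights:            # propagate through the stacked RBMs
        h = sigmoid(h @ w)
    logits = h @ w_out           # single dense layer as the end classifier
    z = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return z / z.sum(axis=-1, keepdims=True)   # softmax class probabilities

w_out = rng.standard_normal((sizes[-1], 4)) * 0.1   # e.g. 4 emotion classes
probs = dbn_forward(rng.standard_normal((5, 128)), weights, w_out)
print(probs.shape)  # (5, 4)
```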

CNN

  • 1-2 Fully Connected Layers

Needs

  • optimize the strategy for using images as CNN inputs
 

RNN

  • 2 LSTM layers
  • images
  • not spectrograms
  • 2D/3D grids

Needs

  • the most effective EEG input form for RNNs
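One way to realize the "2D grid" input form (the 3x3 electrode layout and helper below are my own hypothetical example) is to place each channel's sample at its electrode position, yielding one small image per time step for the recurrent network:

```python
import numpy as np

# Hypothetical 3x3 electrode layout: layout[row, col] = channel index, -1 = empty
layout = np.array([[-1, 0, -1],
                   [ 1, 2,  3],
                   [-1, 4, -1]])

def eeg_to_grid_sequence(eeg):
    """Map (channels, time) EEG to a (time, rows, cols) sequence of 2D grids."""
    n_t = eeg.shape[1]
    seq = np.zeros((n_t, *layout.shape))
    for r in range(layout.shape[0]):
        for c in range(layout.shape[1]):
            ch = layout[r, c]
            if ch >= 0:
                seq[:, r, c] = eeg[ch]   # empty positions stay zero
    return seq

eeg = np.arange(5 * 4).reshape(5, 4).astype(float)  # 5 channels, 4 samples
seq = eeg_to_grid_sequence(eeg)
print(seq.shape)  # (4, 3, 3)
```

This preserves the spatial relationship between electrodes, which a flat channel vector discards.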

 


Discussion_3. Hybrid architectures

 

Across roughly 10 studies:

hybrid LSTM RNN modules > standard LSTM, RNN, and GRU RNN modules

 

The addition of non-convolutional layers to standard CNNs showed promise.

 

The channel-wise hybrid > the rest for cross-subject classification

→ effective for transfer learning in EEG analysis
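A minimal NumPy sketch of the channel-wise idea: a temporal convolution applied per channel, followed by a recurrent pass over the resulting feature sequence. A plain tanh RNN stands in for the LSTM, and all names and sizes are illustrative, not from the reviewed studies:

```python
import numpy as np

rng = np.random.default_rng(3)

def conv1d_per_channel(eeg, kernel):
    """Channel-wise temporal convolution (valid mode) with one shared kernel."""
    return np.stack([np.convolve(ch, kernel, mode="valid") for ch in eeg])

def simple_rnn(seq, w_x, w_h):
    """Vanilla tanh RNN over time; returns the final hidden state."""
    h = np.zeros(w_h.shape[0])
    for x_t in seq:                      # seq is time-major: (time, features)
        h = np.tanh(x_t @ w_x + h @ w_h)
    return h

eeg = rng.standard_normal((8, 64))                # 8 channels, 64 samples
feats = conv1d_per_channel(eeg, np.ones(5) / 5)   # smoothing kernel -> (8, 60)
seq = feats.T                                     # (60, 8): one feature vector per step
w_x = rng.standard_normal((8, 16)) * 0.1
w_h = rng.standard_normal((16, 16)) * 0.1
h = simple_rnn(seq, w_x, w_h)
print(h.shape)  # (16,)
```

The final hidden state `h` would then feed a dense classifier, as in the standard designs above.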


Conclusions

  • CNNs, RNNs, and DBNs outperformed other types of deep networks,
    such as SAEs and MLPNNs.
  • Hybrid designs that integrate convolutional layers with recurrent layers or RBMs
    showed promise in classification accuracy and transfer learning.
  • More research is needed on layer combinations,
    particularly the number and arrangement of different layers,
    including RBMs, recurrent layers, convolutional layers, and FC layers.

 


 


Limitations and Needs

Limitations

  • Newer models such as generative models and Transformers are not covered.
  • No mention of EEGNet (the most famous network in the EEG domain).

 

Needs

  • ERP and sleep stage scoring tasks used only images as inputs.
  • The most effective activation function for SAEs (sigmoid vs ReLU).
  • Seizure detection: research on DBNs.
  • Optimize the strategy for using images as CNN inputs.
  • The most effective EEG input form for RNNs.